00:00:00.001 Started by upstream project "autotest-per-patch" build number 131266
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.087 The recommended git tool is: git
00:00:00.087 using credential 00000000-0000-0000-0000-000000000002
00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.145 Fetching changes from the remote Git repository
00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.213 > git --version # timeout=10
00:00:00.265 > git --version # 'git version 2.39.2'
00:00:00.265 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.298 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:09.539 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:09.553 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:09.563 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:09.563 > git config core.sparsecheckout # timeout=10
00:00:09.574 > git read-tree -mu HEAD # timeout=10
00:00:09.589 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:09.606 Commit message: "packer: Fix typo in a package name"
00:00:09.606 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:09.721 [Pipeline] Start of Pipeline
00:00:09.735 [Pipeline] library
00:00:09.737 Loading library shm_lib@master
00:00:09.737 Library shm_lib@master is cached. Copying from home.
00:00:09.756 [Pipeline] node
00:00:09.764 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:09.765 [Pipeline] {
00:00:09.776 [Pipeline] catchError
00:00:09.777 [Pipeline] {
00:00:09.794 [Pipeline] wrap
00:00:09.804 [Pipeline] {
00:00:09.812 [Pipeline] stage
00:00:09.813 [Pipeline] { (Prologue)
00:00:10.025 [Pipeline] sh
00:00:10.307 + logger -p user.info -t JENKINS-CI
00:00:10.325 [Pipeline] echo
00:00:10.327 Node: GP6
00:00:10.336 [Pipeline] sh
00:00:10.640 [Pipeline] setCustomBuildProperty
00:00:10.653 [Pipeline] echo
00:00:10.654 Cleanup processes
00:00:10.660 [Pipeline] sh
00:00:10.947 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.947 2152183 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.961 [Pipeline] sh
00:00:11.247 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.247 ++ grep -v 'sudo pgrep'
00:00:11.247 ++ awk '{print $1}'
00:00:11.247 + sudo kill -9
00:00:11.247 + true
00:00:11.263 [Pipeline] cleanWs
00:00:11.273 [WS-CLEANUP] Deleting project workspace...
00:00:11.273 [WS-CLEANUP] Deferred wipeout is used...
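The cleanup step above (`pgrep` piped through `grep -v 'sudo pgrep'` and `awk '{print $1}'`, then `kill -9`) is a common idiom for killing leftover test processes without killing the matcher pipeline itself. A minimal sketch against canned input; the second PID and process path are illustrative placeholders, not taken from this run:

```shell
# Cleanup idiom from the log: match processes by path, drop the pgrep
# command itself from the matches, extract PIDs, kill what remains.
# Canned pgrep-style output stands in for `sudo pgrep -af ...`.
matches='2152183 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
2159999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/leftover_app'
pids=$(printf '%s\n' "$matches" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "pids to kill: $pids"
# The pipeline then runs: sudo kill -9 $pids || true
# (the `+ true` record above is that `|| true`, keeping the build alive
# when nothing matched and kill had no arguments)
```

In this run only the `pgrep` process itself matched, which is why the log shows a bare `+ sudo kill -9` followed by `+ true`.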
00:00:11.279 [WS-CLEANUP] done
00:00:11.283 [Pipeline] setCustomBuildProperty
00:00:11.294 [Pipeline] sh
00:00:11.577 + sudo git config --global --replace-all safe.directory '*'
00:00:11.676 [Pipeline] httpRequest
00:00:12.061 [Pipeline] echo
00:00:12.063 Sorcerer 10.211.164.101 is alive
00:00:12.073 [Pipeline] retry
00:00:12.075 [Pipeline] {
00:00:12.090 [Pipeline] httpRequest
00:00:12.095 HttpMethod: GET
00:00:12.095 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:12.095 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:12.098 Response Code: HTTP/1.1 200 OK
00:00:12.098 Success: Status code 200 is in the accepted range: 200,404
00:00:12.099 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:13.208 [Pipeline] }
00:00:13.226 [Pipeline] // retry
00:00:13.232 [Pipeline] sh
00:00:13.519 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:13.537 [Pipeline] httpRequest
00:00:13.944 [Pipeline] echo
00:00:13.946 Sorcerer 10.211.164.101 is alive
00:00:13.958 [Pipeline] retry
00:00:13.961 [Pipeline] {
00:00:13.978 [Pipeline] httpRequest
00:00:13.982 HttpMethod: GET
00:00:13.983 URL: http://10.211.164.101/packages/spdk_767a69c7cd74914b4993c9b527f5091dacedc7ff.tar.gz
00:00:13.983 Sending request to url: http://10.211.164.101/packages/spdk_767a69c7cd74914b4993c9b527f5091dacedc7ff.tar.gz
00:00:14.010 Response Code: HTTP/1.1 200 OK
00:00:14.010 Success: Status code 200 is in the accepted range: 200,404
00:00:14.010 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_767a69c7cd74914b4993c9b527f5091dacedc7ff.tar.gz
00:02:12.218 [Pipeline] }
00:02:12.235 [Pipeline] // retry
00:02:12.243 [Pipeline] sh
00:02:12.533 + tar --no-same-owner -xf spdk_767a69c7cd74914b4993c9b527f5091dacedc7ff.tar.gz
00:02:15.081 [Pipeline] sh
00:02:15.372 + git -C spdk log --oneline -n5
00:02:15.372 767a69c7c nvme/rdma: Support accel sequence
00:02:15.372 4fbf6d88a lib/rdma_provider: Add API to check if accel seq supported
00:02:15.372 759f895c3 lib/mlx5: Add API to check if UMR registration supported
00:02:15.372 95baf53d7 accel/mlx5: Merge crypto+copy to reg UMR
00:02:15.372 278f2f65a accel/mlx5: Initial implementation of mlx5 platform driver
00:02:15.384 [Pipeline] }
00:02:15.399 [Pipeline] // stage
00:02:15.409 [Pipeline] stage
00:02:15.411 [Pipeline] { (Prepare)
00:02:15.430 [Pipeline] writeFile
00:02:15.447 [Pipeline] sh
00:02:15.785 + logger -p user.info -t JENKINS-CI
00:02:15.800 [Pipeline] sh
00:02:16.090 + logger -p user.info -t JENKINS-CI
00:02:16.104 [Pipeline] sh
00:02:16.392 + cat autorun-spdk.conf
00:02:16.392 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.392 SPDK_TEST_NVMF=1
00:02:16.392 SPDK_TEST_NVME_CLI=1
00:02:16.392 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.392 SPDK_TEST_NVMF_NICS=e810
00:02:16.392 SPDK_TEST_VFIOUSER=1
00:02:16.392 SPDK_RUN_UBSAN=1
00:02:16.392 NET_TYPE=phy
00:02:16.401 RUN_NIGHTLY=0
00:02:16.406 [Pipeline] readFile
00:02:16.434 [Pipeline] withEnv
00:02:16.436 [Pipeline] {
00:02:16.451 [Pipeline] sh
00:02:16.765 + set -ex
00:02:16.765 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:16.765 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:16.765 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.765 ++ SPDK_TEST_NVMF=1
00:02:16.765 ++ SPDK_TEST_NVME_CLI=1
00:02:16.765 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.765 ++ SPDK_TEST_NVMF_NICS=e810
00:02:16.765 ++ SPDK_TEST_VFIOUSER=1
00:02:16.765 ++ SPDK_RUN_UBSAN=1
00:02:16.765 ++ NET_TYPE=phy
00:02:16.765 ++ RUN_NIGHTLY=0
00:02:16.765 + case $SPDK_TEST_NVMF_NICS in
00:02:16.765 + DRIVERS=ice
00:02:16.765 + [[ tcp == \r\d\m\a ]]
00:02:16.765 + [[ -n ice ]]
00:02:16.765 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:16.765 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:16.765 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:16.765 rmmod: ERROR: Module irdma is not currently loaded
00:02:16.765 rmmod: ERROR: Module i40iw is not currently loaded
00:02:16.765 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:16.765 + true
00:02:16.765 + for D in $DRIVERS
00:02:16.765 + sudo modprobe ice
00:02:16.765 + exit 0
00:02:16.775 [Pipeline] }
00:02:16.788 [Pipeline] // withEnv
00:02:16.793 [Pipeline] }
00:02:16.808 [Pipeline] // stage
00:02:16.818 [Pipeline] catchError
00:02:16.820 [Pipeline] {
00:02:16.834 [Pipeline] timeout
00:02:16.834 Timeout set to expire in 1 hr 0 min
00:02:16.836 [Pipeline] {
00:02:16.849 [Pipeline] stage
00:02:16.851 [Pipeline] { (Tests)
00:02:16.865 [Pipeline] sh
00:02:17.152 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:17.152 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:17.152 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:17.152 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:17.152 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:17.152 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:17.152 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:17.152 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:17.152 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:17.152 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:17.152 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:17.152 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:17.152 + source /etc/os-release
00:02:17.152 ++ NAME='Fedora Linux'
00:02:17.152 ++ VERSION='39 (Cloud Edition)'
00:02:17.152 ++ ID=fedora
00:02:17.152 ++ VERSION_ID=39
00:02:17.152 ++ VERSION_CODENAME=
00:02:17.152 ++ PLATFORM_ID=platform:f39
00:02:17.152 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:17.152 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:17.152 ++ LOGO=fedora-logo-icon
00:02:17.152 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:17.152 ++ HOME_URL=https://fedoraproject.org/
00:02:17.152 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:17.152 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:17.152 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:17.152 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:17.152 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:17.152 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:17.152 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:17.152 ++ SUPPORT_END=2024-11-12
00:02:17.152 ++ VARIANT='Cloud Edition'
00:02:17.152 ++ VARIANT_ID=cloud
00:02:17.152 + uname -a
00:02:17.153 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:17.153 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:18.530 Hugepages
00:02:18.530 node hugesize free / total
00:02:18.530 node0 1048576kB 0 / 0
00:02:18.530 node0 2048kB 0 / 0
00:02:18.530 node1 1048576kB 0 / 0
00:02:18.530 node1 2048kB 0 / 0
00:02:18.530
00:02:18.530 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:18.530 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:02:18.530 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:02:18.530 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:18.530 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:02:18.530 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:02:18.530 + rm -f /tmp/spdk-ld-path
00:02:18.530 + source autorun-spdk.conf
00:02:18.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.530 ++ SPDK_TEST_NVMF=1
00:02:18.530 ++ SPDK_TEST_NVME_CLI=1
00:02:18.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:18.530 ++ SPDK_TEST_NVMF_NICS=e810
00:02:18.530 ++ SPDK_TEST_VFIOUSER=1
00:02:18.530 ++ SPDK_RUN_UBSAN=1
00:02:18.530 ++ NET_TYPE=phy
00:02:18.530 ++ RUN_NIGHTLY=0
00:02:18.530 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:18.530 + [[ -n '' ]]
00:02:18.530 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:18.530 + for M in /var/spdk/build-*-manifest.txt
00:02:18.530 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:18.530 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:18.530 + for M in /var/spdk/build-*-manifest.txt
00:02:18.530 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:18.530 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:18.530 + for M in /var/spdk/build-*-manifest.txt
00:02:18.530 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:18.530 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:18.530 ++ uname
00:02:18.530 + [[ Linux == \L\i\n\u\x ]]
00:02:18.530 + sudo dmesg -T
00:02:18.530 + sudo dmesg --clear
00:02:18.530 + dmesg_pid=2153502
00:02:18.530 + [[ Fedora Linux == FreeBSD ]]
00:02:18.530 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:18.530 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:18.530 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:18.530 + sudo dmesg -Tw
00:02:18.530 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:18.530 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:18.530 + [[ -x /usr/src/fio-static/fio ]]
00:02:18.530 + export FIO_BIN=/usr/src/fio-static/fio
00:02:18.530 + FIO_BIN=/usr/src/fio-static/fio
00:02:18.530 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:18.530 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:18.530 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:18.530 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:18.530 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:18.530 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:18.530 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:18.530 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:18.530 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.530 Test configuration:
00:02:18.530 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.530 SPDK_TEST_NVMF=1
00:02:18.530 SPDK_TEST_NVME_CLI=1
00:02:18.530 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:18.530 SPDK_TEST_NVMF_NICS=e810
00:02:18.530 SPDK_TEST_VFIOUSER=1
00:02:18.530 SPDK_RUN_UBSAN=1
00:02:18.530 NET_TYPE=phy
00:02:18.530 RUN_NIGHTLY=0 16:30:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:18.530 16:30:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:18.530 16:30:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:18.530 16:30:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:18.530 16:30:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:18.530 16:30:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:18.530 16:30:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.530 16:30:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.530 16:30:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.530 16:30:32 -- paths/export.sh@5 -- $ export PATH
00:02:18.530 16:30:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.530 16:30:32 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:18.530 16:30:32 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:18.530 16:30:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729175432.XXXXXX
00:02:18.530 16:30:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729175432.UsNxqh
00:02:18.530 16:30:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:18.530 16:30:32 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:18.530 16:30:32 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
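The `mktemp -dt spdk_1729175432.XXXXXX` record above shows how autobuild creates its per-run scratch workspace: a readable epoch timestamp plus mktemp's random suffix for uniqueness (yielding names like `/tmp/spdk_1729175432.UsNxqh`). A sketch of the same idiom; the resulting directory name is whatever mktemp generates, not the one from this log:

```shell
# Per-run scratch workspace: epoch timestamp for readability, random
# mktemp suffix for uniqueness, created under the system temp dir (-t).
stamp=$(date +%s)
workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")
echo "workspace: $workspace"
rmdir "$workspace"   # the real run keeps it for the build's lifetime
```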
00:02:18.530 16:30:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:18.530 16:30:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:18.530 16:30:32 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:18.530 16:30:32 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:18.530 16:30:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.530 16:30:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:18.531 16:30:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:18.531 16:30:32 -- pm/common@17 -- $ local monitor 00:02:18.531 16:30:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.531 16:30:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.531 16:30:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.531 16:30:32 -- pm/common@21 -- $ date +%s 00:02:18.531 16:30:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.531 16:30:32 -- pm/common@21 -- $ date +%s 00:02:18.531 16:30:32 -- pm/common@25 -- $ sleep 1 00:02:18.531 16:30:32 -- pm/common@21 -- $ date +%s 00:02:18.531 16:30:32 -- pm/common@21 -- $ date +%s 00:02:18.531 16:30:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729175432 00:02:18.531 16:30:32 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729175432 00:02:18.531 16:30:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729175432 00:02:18.531 16:30:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729175432 00:02:18.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729175432_collect-cpu-load.pm.log 00:02:18.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729175432_collect-vmstat.pm.log 00:02:18.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729175432_collect-cpu-temp.pm.log 00:02:18.531 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729175432_collect-bmc-pm.bmc.pm.log 00:02:19.471 16:30:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:19.471 16:30:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:19.471 16:30:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:19.471 16:30:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.471 16:30:33 -- spdk/autobuild.sh@16 -- $ date -u 00:02:19.471 Thu Oct 17 02:30:33 PM UTC 2024 00:02:19.471 16:30:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:19.471 v25.01-pre-84-g767a69c7c 00:02:19.471 16:30:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:19.471 16:30:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:19.471 16:30:33 -- spdk/autobuild.sh@24 -- $ run_test 
ubsan echo 'using ubsan' 00:02:19.471 16:30:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:19.471 16:30:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:19.471 16:30:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.471 ************************************ 00:02:19.471 START TEST ubsan 00:02:19.471 ************************************ 00:02:19.471 16:30:33 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:19.471 using ubsan 00:02:19.471 00:02:19.471 real 0m0.000s 00:02:19.471 user 0m0.000s 00:02:19.471 sys 0m0.000s 00:02:19.471 16:30:33 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:19.471 16:30:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:19.471 ************************************ 00:02:19.471 END TEST ubsan 00:02:19.471 ************************************ 00:02:19.730 16:30:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:19.730 16:30:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:19.730 16:30:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:19.730 16:30:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:19.730 16:30:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:19.730 16:30:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:19.730 16:30:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:19.730 16:30:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:19.730 16:30:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:19.730 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:19.730 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:19.989 Using 'verbs' RDMA provider 00:02:30.546 Configuring ISA-L (logfile: 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:40.541 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:40.541 Creating mk/config.mk...done. 00:02:40.541 Creating mk/cc.flags.mk...done. 00:02:40.541 Type 'make' to build. 00:02:40.541 16:30:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:40.541 16:30:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:40.541 16:30:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:40.541 16:30:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.541 ************************************ 00:02:40.541 START TEST make 00:02:40.541 ************************************ 00:02:40.542 16:30:53 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:40.542 make[1]: Nothing to be done for 'all'. 00:02:42.463 The Meson build system 00:02:42.463 Version: 1.5.0 00:02:42.463 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:42.463 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:42.463 Build type: native build 00:02:42.463 Project name: libvfio-user 00:02:42.463 Project version: 0.0.1 00:02:42.463 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.463 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.463 Host machine cpu family: x86_64 00:02:42.463 Host machine cpu: x86_64 00:02:42.463 Run-time dependency threads found: YES 00:02:42.463 Library dl found: YES 00:02:42.463 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.463 Run-time dependency json-c found: YES 0.17 00:02:42.463 Run-time dependency cmocka found: YES 1.1.7 00:02:42.463 Program pytest-3 found: NO 00:02:42.463 Program flake8 found: NO 00:02:42.463 Program misspell-fixer found: NO 00:02:42.463 Program restructuredtext-lint found: NO 00:02:42.463 Program valgrind found: YES 
(/usr/bin/valgrind) 00:02:42.463 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.463 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.463 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.463 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:42.463 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:42.463 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:42.463 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:42.463 Build targets in project: 8 00:02:42.463 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:42.463 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:42.463 00:02:42.463 libvfio-user 0.0.1 00:02:42.463 00:02:42.463 User defined options 00:02:42.463 buildtype : debug 00:02:42.463 default_library: shared 00:02:42.463 libdir : /usr/local/lib 00:02:42.463 00:02:42.463 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.414 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:43.414 [1/37] Compiling C object samples/null.p/null.c.o 00:02:43.414 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:43.414 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:43.414 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:43.414 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:43.414 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:43.414 [7/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:43.414 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:43.414 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:43.414 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:43.414 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:43.414 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:43.414 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:43.414 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:43.414 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:43.414 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:43.414 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:43.414 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:43.414 [19/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:43.414 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:43.680 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:43.680 [22/37] Compiling C object samples/server.p/server.c.o 00:02:43.680 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:43.680 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:43.680 [25/37] Compiling C object samples/client.p/client.c.o 00:02:43.680 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:43.680 [27/37] Linking target samples/client 00:02:43.680 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:43.680 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:43.680 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:43.959 [31/37] Linking target test/unit_tests 00:02:43.959 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:43.959 [33/37] Linking target samples/null 00:02:43.959 [34/37] Linking target 
samples/server 00:02:43.959 [35/37] Linking target samples/gpio-pci-idio-16 00:02:43.959 [36/37] Linking target samples/lspci 00:02:43.959 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:44.221 INFO: autodetecting backend as ninja 00:02:44.221 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:44.221 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:45.164 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:45.164 ninja: no work to do. 00:02:49.354 The Meson build system 00:02:49.354 Version: 1.5.0 00:02:49.354 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:49.354 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:49.354 Build type: native build 00:02:49.354 Program cat found: YES (/usr/bin/cat) 00:02:49.354 Project name: DPDK 00:02:49.354 Project version: 24.03.0 00:02:49.354 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.354 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.354 Host machine cpu family: x86_64 00:02:49.354 Host machine cpu: x86_64 00:02:49.354 Message: ## Building in Developer Mode ## 00:02:49.354 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:49.354 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:49.354 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:49.354 Program python3 found: YES (/usr/bin/python3) 00:02:49.354 Program cat found: YES (/usr/bin/cat) 00:02:49.354 Compiler for C supports arguments -march=native: YES 00:02:49.354 
Checking for size of "void *" : 8 00:02:49.354 Checking for size of "void *" : 8 (cached) 00:02:49.354 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:49.354 Library m found: YES 00:02:49.354 Library numa found: YES 00:02:49.354 Has header "numaif.h" : YES 00:02:49.354 Library fdt found: NO 00:02:49.354 Library execinfo found: NO 00:02:49.354 Has header "execinfo.h" : YES 00:02:49.354 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.354 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:49.354 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:49.354 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:49.354 Run-time dependency openssl found: YES 3.1.1 00:02:49.354 Run-time dependency libpcap found: YES 1.10.4 00:02:49.354 Has header "pcap.h" with dependency libpcap: YES 00:02:49.354 Compiler for C supports arguments -Wcast-qual: YES 00:02:49.354 Compiler for C supports arguments -Wdeprecated: YES 00:02:49.354 Compiler for C supports arguments -Wformat: YES 00:02:49.354 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:49.354 Compiler for C supports arguments -Wformat-security: NO 00:02:49.354 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:49.354 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:49.354 Compiler for C supports arguments -Wnested-externs: YES 00:02:49.354 Compiler for C supports arguments -Wold-style-definition: YES 00:02:49.354 Compiler for C supports arguments -Wpointer-arith: YES 00:02:49.354 Compiler for C supports arguments -Wsign-compare: YES 00:02:49.354 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:49.354 Compiler for C supports arguments -Wundef: YES 00:02:49.355 Compiler for C supports arguments -Wwrite-strings: YES 00:02:49.355 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:49.355 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:49.355 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:49.355 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:49.355 Program objdump found: YES (/usr/bin/objdump) 00:02:49.355 Compiler for C supports arguments -mavx512f: YES 00:02:49.355 Checking if "AVX512 checking" compiles: YES 00:02:49.355 Fetching value of define "__SSE4_2__" : 1 00:02:49.355 Fetching value of define "__AES__" : 1 00:02:49.355 Fetching value of define "__AVX__" : 1 00:02:49.355 Fetching value of define "__AVX2__" : (undefined) 00:02:49.355 Fetching value of define "__AVX512BW__" : (undefined) 00:02:49.355 Fetching value of define "__AVX512CD__" : (undefined) 00:02:49.355 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:49.355 Fetching value of define "__AVX512F__" : (undefined) 00:02:49.355 Fetching value of define "__AVX512VL__" : (undefined) 00:02:49.355 Fetching value of define "__PCLMUL__" : 1 00:02:49.355 Fetching value of define "__RDRND__" : 1 00:02:49.355 Fetching value of define "__RDSEED__" : (undefined) 00:02:49.355 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:49.355 Fetching value of define "__znver1__" : (undefined) 00:02:49.355 Fetching value of define "__znver2__" : (undefined) 00:02:49.355 Fetching value of define "__znver3__" : (undefined) 00:02:49.355 Fetching value of define "__znver4__" : (undefined) 00:02:49.355 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:49.355 Message: lib/log: Defining dependency "log" 00:02:49.355 Message: lib/kvargs: Defining dependency "kvargs" 00:02:49.355 Message: lib/telemetry: Defining dependency "telemetry" 00:02:49.355 Checking for function "getentropy" : NO 00:02:49.355 Message: lib/eal: Defining dependency "eal" 00:02:49.355 Message: lib/ring: Defining dependency "ring" 00:02:49.355 Message: lib/rcu: Defining dependency "rcu" 00:02:49.355 Message: lib/mempool: Defining dependency "mempool" 00:02:49.355 Message: lib/mbuf: Defining dependency "mbuf" 00:02:49.355 
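The `Fetching value of define "..."` lines above are Meson probing which CPU-feature macros the compiler predefines for `-march=native` (a value of `(undefined)` means the host CPU lacks that feature, e.g. no AVX2/AVX-512 here). A small sketch of how one might summarize those probes from a configure log like this one; the sample lines are copied from the log, but the parsing helper is my own, not part of SPDK or DPDK tooling.

```python
# Sketch: summarize Meson's CPU-feature define probes from a configure
# log such as the one above. The regex and helper are illustrative only.
import re

log = """
Fetching value of define "__AVX__" : 1
Fetching value of define "__AVX2__" : (undefined)
Fetching value of define "__AVX512F__" : (undefined)
Fetching value of define "__PCLMUL__" : 1
"""

def detected_defines(text):
    """Return {define: bool} for each 'Fetching value of define' line."""
    found = {}
    for m in re.finditer(r'Fetching value of define "(\w+)" : (\S+)', text):
        found[m.group(1)] = m.group(2) != "(undefined)"
    return found

print(detected_defines(log))
# {'__AVX__': True, '__AVX2__': False, '__AVX512F__': False, '__PCLMUL__': True}
```

Reading the probes this way makes it quick to confirm why the later `-mavx512f`-compiled objects are built as runtime-dispatched alternatives rather than assumed baseline.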
Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:49.355 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:49.355 Compiler for C supports arguments -mpclmul: YES 00:02:49.355 Compiler for C supports arguments -maes: YES 00:02:49.355 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.355 Compiler for C supports arguments -mavx512bw: YES 00:02:49.355 Compiler for C supports arguments -mavx512dq: YES 00:02:49.355 Compiler for C supports arguments -mavx512vl: YES 00:02:49.355 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:49.355 Compiler for C supports arguments -mavx2: YES 00:02:49.355 Compiler for C supports arguments -mavx: YES 00:02:49.355 Message: lib/net: Defining dependency "net" 00:02:49.355 Message: lib/meter: Defining dependency "meter" 00:02:49.355 Message: lib/ethdev: Defining dependency "ethdev" 00:02:49.355 Message: lib/pci: Defining dependency "pci" 00:02:49.355 Message: lib/cmdline: Defining dependency "cmdline" 00:02:49.355 Message: lib/hash: Defining dependency "hash" 00:02:49.355 Message: lib/timer: Defining dependency "timer" 00:02:49.355 Message: lib/compressdev: Defining dependency "compressdev" 00:02:49.355 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:49.355 Message: lib/dmadev: Defining dependency "dmadev" 00:02:49.355 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:49.355 Message: lib/power: Defining dependency "power" 00:02:49.355 Message: lib/reorder: Defining dependency "reorder" 00:02:49.355 Message: lib/security: Defining dependency "security" 00:02:49.355 Has header "linux/userfaultfd.h" : YES 00:02:49.355 Has header "linux/vduse.h" : YES 00:02:49.355 Message: lib/vhost: Defining dependency "vhost" 00:02:49.355 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:49.355 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:49.355 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:49.355 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:49.355 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:49.355 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:49.355 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:49.355 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:49.355 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:49.355 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:49.355 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.355 Configuring doxy-api-html.conf using configuration 00:02:49.355 Configuring doxy-api-man.conf using configuration 00:02:49.355 Program mandb found: YES (/usr/bin/mandb) 00:02:49.355 Program sphinx-build found: NO 00:02:49.355 Configuring rte_build_config.h using configuration 00:02:49.355 Message: 00:02:49.355 ================= 00:02:49.355 Applications Enabled 00:02:49.355 ================= 00:02:49.355 00:02:49.355 apps: 00:02:49.355 00:02:49.355 00:02:49.355 Message: 00:02:49.355 ================= 00:02:49.355 Libraries Enabled 00:02:49.355 ================= 00:02:49.355 00:02:49.355 libs: 00:02:49.355 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:49.355 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:49.355 cryptodev, dmadev, power, reorder, security, vhost, 00:02:49.355 00:02:49.355 Message: 00:02:49.355 =============== 00:02:49.355 Drivers Enabled 00:02:49.355 =============== 00:02:49.355 00:02:49.355 common: 00:02:49.355 00:02:49.355 bus: 00:02:49.355 pci, vdev, 00:02:49.355 mempool: 00:02:49.355 ring, 00:02:49.355 dma: 00:02:49.355 00:02:49.355 net: 00:02:49.355 00:02:49.355 crypto: 00:02:49.355 00:02:49.355 compress: 00:02:49.355 00:02:49.355 vdpa: 00:02:49.355 00:02:49.355 00:02:49.355 Message: 00:02:49.355 ================= 00:02:49.355 Content Skipped 00:02:49.355 ================= 
00:02:49.355 00:02:49.355 apps: 00:02:49.355 dumpcap: explicitly disabled via build config 00:02:49.355 graph: explicitly disabled via build config 00:02:49.355 pdump: explicitly disabled via build config 00:02:49.355 proc-info: explicitly disabled via build config 00:02:49.355 test-acl: explicitly disabled via build config 00:02:49.355 test-bbdev: explicitly disabled via build config 00:02:49.355 test-cmdline: explicitly disabled via build config 00:02:49.355 test-compress-perf: explicitly disabled via build config 00:02:49.355 test-crypto-perf: explicitly disabled via build config 00:02:49.355 test-dma-perf: explicitly disabled via build config 00:02:49.355 test-eventdev: explicitly disabled via build config 00:02:49.355 test-fib: explicitly disabled via build config 00:02:49.355 test-flow-perf: explicitly disabled via build config 00:02:49.355 test-gpudev: explicitly disabled via build config 00:02:49.355 test-mldev: explicitly disabled via build config 00:02:49.355 test-pipeline: explicitly disabled via build config 00:02:49.355 test-pmd: explicitly disabled via build config 00:02:49.355 test-regex: explicitly disabled via build config 00:02:49.355 test-sad: explicitly disabled via build config 00:02:49.355 test-security-perf: explicitly disabled via build config 00:02:49.355 00:02:49.355 libs: 00:02:49.355 argparse: explicitly disabled via build config 00:02:49.355 metrics: explicitly disabled via build config 00:02:49.355 acl: explicitly disabled via build config 00:02:49.355 bbdev: explicitly disabled via build config 00:02:49.355 bitratestats: explicitly disabled via build config 00:02:49.355 bpf: explicitly disabled via build config 00:02:49.355 cfgfile: explicitly disabled via build config 00:02:49.355 distributor: explicitly disabled via build config 00:02:49.355 efd: explicitly disabled via build config 00:02:49.355 eventdev: explicitly disabled via build config 00:02:49.355 dispatcher: explicitly disabled via build config 00:02:49.355 gpudev: 
explicitly disabled via build config 00:02:49.355 gro: explicitly disabled via build config 00:02:49.355 gso: explicitly disabled via build config 00:02:49.355 ip_frag: explicitly disabled via build config 00:02:49.355 jobstats: explicitly disabled via build config 00:02:49.355 latencystats: explicitly disabled via build config 00:02:49.355 lpm: explicitly disabled via build config 00:02:49.355 member: explicitly disabled via build config 00:02:49.355 pcapng: explicitly disabled via build config 00:02:49.355 rawdev: explicitly disabled via build config 00:02:49.355 regexdev: explicitly disabled via build config 00:02:49.355 mldev: explicitly disabled via build config 00:02:49.355 rib: explicitly disabled via build config 00:02:49.355 sched: explicitly disabled via build config 00:02:49.355 stack: explicitly disabled via build config 00:02:49.355 ipsec: explicitly disabled via build config 00:02:49.355 pdcp: explicitly disabled via build config 00:02:49.355 fib: explicitly disabled via build config 00:02:49.355 port: explicitly disabled via build config 00:02:49.355 pdump: explicitly disabled via build config 00:02:49.355 table: explicitly disabled via build config 00:02:49.355 pipeline: explicitly disabled via build config 00:02:49.355 graph: explicitly disabled via build config 00:02:49.355 node: explicitly disabled via build config 00:02:49.355 00:02:49.355 drivers: 00:02:49.355 common/cpt: not in enabled drivers build config 00:02:49.355 common/dpaax: not in enabled drivers build config 00:02:49.355 common/iavf: not in enabled drivers build config 00:02:49.355 common/idpf: not in enabled drivers build config 00:02:49.355 common/ionic: not in enabled drivers build config 00:02:49.355 common/mvep: not in enabled drivers build config 00:02:49.355 common/octeontx: not in enabled drivers build config 00:02:49.355 bus/auxiliary: not in enabled drivers build config 00:02:49.355 bus/cdx: not in enabled drivers build config 00:02:49.355 bus/dpaa: not in enabled drivers 
build config 00:02:49.355 bus/fslmc: not in enabled drivers build config 00:02:49.355 bus/ifpga: not in enabled drivers build config 00:02:49.355 bus/platform: not in enabled drivers build config 00:02:49.355 bus/uacce: not in enabled drivers build config 00:02:49.355 bus/vmbus: not in enabled drivers build config 00:02:49.355 common/cnxk: not in enabled drivers build config 00:02:49.355 common/mlx5: not in enabled drivers build config 00:02:49.355 common/nfp: not in enabled drivers build config 00:02:49.355 common/nitrox: not in enabled drivers build config 00:02:49.355 common/qat: not in enabled drivers build config 00:02:49.355 common/sfc_efx: not in enabled drivers build config 00:02:49.355 mempool/bucket: not in enabled drivers build config 00:02:49.355 mempool/cnxk: not in enabled drivers build config 00:02:49.355 mempool/dpaa: not in enabled drivers build config 00:02:49.355 mempool/dpaa2: not in enabled drivers build config 00:02:49.355 mempool/octeontx: not in enabled drivers build config 00:02:49.355 mempool/stack: not in enabled drivers build config 00:02:49.355 dma/cnxk: not in enabled drivers build config 00:02:49.355 dma/dpaa: not in enabled drivers build config 00:02:49.356 dma/dpaa2: not in enabled drivers build config 00:02:49.356 dma/hisilicon: not in enabled drivers build config 00:02:49.356 dma/idxd: not in enabled drivers build config 00:02:49.356 dma/ioat: not in enabled drivers build config 00:02:49.356 dma/skeleton: not in enabled drivers build config 00:02:49.356 net/af_packet: not in enabled drivers build config 00:02:49.356 net/af_xdp: not in enabled drivers build config 00:02:49.356 net/ark: not in enabled drivers build config 00:02:49.356 net/atlantic: not in enabled drivers build config 00:02:49.356 net/avp: not in enabled drivers build config 00:02:49.356 net/axgbe: not in enabled drivers build config 00:02:49.356 net/bnx2x: not in enabled drivers build config 00:02:49.356 net/bnxt: not in enabled drivers build config 00:02:49.356 
net/bonding: not in enabled drivers build config 00:02:49.356 net/cnxk: not in enabled drivers build config 00:02:49.356 net/cpfl: not in enabled drivers build config 00:02:49.356 net/cxgbe: not in enabled drivers build config 00:02:49.356 net/dpaa: not in enabled drivers build config 00:02:49.356 net/dpaa2: not in enabled drivers build config 00:02:49.356 net/e1000: not in enabled drivers build config 00:02:49.356 net/ena: not in enabled drivers build config 00:02:49.356 net/enetc: not in enabled drivers build config 00:02:49.356 net/enetfec: not in enabled drivers build config 00:02:49.356 net/enic: not in enabled drivers build config 00:02:49.356 net/failsafe: not in enabled drivers build config 00:02:49.356 net/fm10k: not in enabled drivers build config 00:02:49.356 net/gve: not in enabled drivers build config 00:02:49.356 net/hinic: not in enabled drivers build config 00:02:49.356 net/hns3: not in enabled drivers build config 00:02:49.356 net/i40e: not in enabled drivers build config 00:02:49.356 net/iavf: not in enabled drivers build config 00:02:49.356 net/ice: not in enabled drivers build config 00:02:49.356 net/idpf: not in enabled drivers build config 00:02:49.356 net/igc: not in enabled drivers build config 00:02:49.356 net/ionic: not in enabled drivers build config 00:02:49.356 net/ipn3ke: not in enabled drivers build config 00:02:49.356 net/ixgbe: not in enabled drivers build config 00:02:49.356 net/mana: not in enabled drivers build config 00:02:49.356 net/memif: not in enabled drivers build config 00:02:49.356 net/mlx4: not in enabled drivers build config 00:02:49.356 net/mlx5: not in enabled drivers build config 00:02:49.356 net/mvneta: not in enabled drivers build config 00:02:49.356 net/mvpp2: not in enabled drivers build config 00:02:49.356 net/netvsc: not in enabled drivers build config 00:02:49.356 net/nfb: not in enabled drivers build config 00:02:49.356 net/nfp: not in enabled drivers build config 00:02:49.356 net/ngbe: not in enabled drivers 
build config 00:02:49.356 net/null: not in enabled drivers build config 00:02:49.356 net/octeontx: not in enabled drivers build config 00:02:49.356 net/octeon_ep: not in enabled drivers build config 00:02:49.356 net/pcap: not in enabled drivers build config 00:02:49.356 net/pfe: not in enabled drivers build config 00:02:49.356 net/qede: not in enabled drivers build config 00:02:49.356 net/ring: not in enabled drivers build config 00:02:49.356 net/sfc: not in enabled drivers build config 00:02:49.356 net/softnic: not in enabled drivers build config 00:02:49.356 net/tap: not in enabled drivers build config 00:02:49.356 net/thunderx: not in enabled drivers build config 00:02:49.356 net/txgbe: not in enabled drivers build config 00:02:49.356 net/vdev_netvsc: not in enabled drivers build config 00:02:49.356 net/vhost: not in enabled drivers build config 00:02:49.356 net/virtio: not in enabled drivers build config 00:02:49.356 net/vmxnet3: not in enabled drivers build config 00:02:49.356 raw/*: missing internal dependency, "rawdev" 00:02:49.356 crypto/armv8: not in enabled drivers build config 00:02:49.356 crypto/bcmfs: not in enabled drivers build config 00:02:49.356 crypto/caam_jr: not in enabled drivers build config 00:02:49.356 crypto/ccp: not in enabled drivers build config 00:02:49.356 crypto/cnxk: not in enabled drivers build config 00:02:49.356 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.356 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.356 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.356 crypto/mlx5: not in enabled drivers build config 00:02:49.356 crypto/mvsam: not in enabled drivers build config 00:02:49.356 crypto/nitrox: not in enabled drivers build config 00:02:49.356 crypto/null: not in enabled drivers build config 00:02:49.356 crypto/octeontx: not in enabled drivers build config 00:02:49.356 crypto/openssl: not in enabled drivers build config 00:02:49.356 crypto/scheduler: not in enabled drivers build 
config 00:02:49.356 crypto/uadk: not in enabled drivers build config 00:02:49.356 crypto/virtio: not in enabled drivers build config 00:02:49.356 compress/isal: not in enabled drivers build config 00:02:49.356 compress/mlx5: not in enabled drivers build config 00:02:49.356 compress/nitrox: not in enabled drivers build config 00:02:49.356 compress/octeontx: not in enabled drivers build config 00:02:49.356 compress/zlib: not in enabled drivers build config 00:02:49.356 regex/*: missing internal dependency, "regexdev" 00:02:49.356 ml/*: missing internal dependency, "mldev" 00:02:49.356 vdpa/ifc: not in enabled drivers build config 00:02:49.356 vdpa/mlx5: not in enabled drivers build config 00:02:49.356 vdpa/nfp: not in enabled drivers build config 00:02:49.356 vdpa/sfc: not in enabled drivers build config 00:02:49.356 event/*: missing internal dependency, "eventdev" 00:02:49.356 baseband/*: missing internal dependency, "bbdev" 00:02:49.356 gpu/*: missing internal dependency, "gpudev" 00:02:49.356 00:02:49.356 00:02:49.923 Build targets in project: 85 00:02:49.923 00:02:49.923 DPDK 24.03.0 00:02:49.923 00:02:49.923 User defined options 00:02:49.923 buildtype : debug 00:02:49.923 default_library : shared 00:02:49.923 libdir : lib 00:02:49.923 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:49.923 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:49.923 c_link_args : 00:02:49.923 cpu_instruction_set: native 00:02:49.923 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:49.923 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:49.923 enable_docs : false 00:02:49.923 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:49.923 enable_kmods : false 00:02:49.923 max_lcores : 128 00:02:49.923 tests : false 00:02:49.923 00:02:49.923 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.496 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:50.496 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.496 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.496 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.496 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.496 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.496 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.496 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.496 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.496 [9/268] Linking static target lib/librte_kvargs.a 00:02:50.496 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.496 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.496 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.496 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.496 [14/268] Linking static target lib/librte_log.a 00:02:50.496 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.759 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.020 
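The `[N/268]` markers that begin here are ninja's step counter: N steps finished out of 268 total build edges. A minimal sketch of turning those markers into a progress figure when scanning a log like this; the sample lines are taken from the log, while the helper itself is illustrative and not part of any SPDK/DPDK tool.

```python
# Sketch: compute build progress from ninja's "[N/M] ..." step markers
# in a log like the one above. Parsing logic is illustrative only.
import re

lines = [
    "[1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o",
    "[14/268] Linking static target lib/librte_log.a",
    "[67/268] Linking target lib/librte_log.so.24.1",
]

def progress(line):
    """Return (done, total) from a leading '[N/M]' marker, else None."""
    m = re.match(r"\[(\d+)/(\d+)\]", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

done, total = progress(lines[-1])
print(f"{done}/{total} = {100 * done // total}%")  # 67/268 = 25%
```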
[17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.282 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:51.283 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:51.283 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:51.283 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.283 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.283 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.283 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.283 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.283 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.283 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.283 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.283 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.283 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.283 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.283 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.283 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.283 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.283 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:51.283 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.544 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.544 [38/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.544 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:51.544 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.544 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:51.544 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:51.544 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.544 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.544 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:51.544 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:51.544 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:51.544 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:51.544 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.544 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.544 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.544 [52/268] Linking static target lib/librte_telemetry.a 00:02:51.544 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.544 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:51.544 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.544 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.544 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.544 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:51.544 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:51.544 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:51.544 
[61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.544 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:51.544 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:51.544 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:51.808 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.808 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:51.808 [67/268] Linking target lib/librte_log.so.24.1 00:02:52.069 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.069 [69/268] Linking static target lib/librte_pci.a 00:02:52.069 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.069 [71/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:52.069 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:52.069 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.334 [74/268] Linking target lib/librte_kvargs.so.24.1 00:02:52.334 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:52.334 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:52.334 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:52.334 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.334 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.334 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.334 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.334 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.334 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:52.334 [84/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:52.334 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:52.334 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:52.334 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:52.334 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.334 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:52.334 [90/268] Linking static target lib/librte_meter.a 00:02:52.334 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.334 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:52.334 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:52.334 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:52.334 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:52.334 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.334 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.596 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:52.596 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.596 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:52.596 [101/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:52.596 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:52.596 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:52.596 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.596 [105/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.596 [106/268] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.596 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:52.596 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:52.596 [109/268] Linking static target lib/librte_ring.a 00:02:52.596 [110/268] Linking static target lib/librte_eal.a 00:02:52.596 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:52.596 [112/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.596 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.596 [114/268] Linking target lib/librte_telemetry.so.24.1 00:02:52.596 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:52.596 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:52.596 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.596 [118/268] Linking static target lib/librte_rcu.a 00:02:52.596 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:52.596 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:52.596 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:52.596 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.857 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:52.857 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:52.857 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:52.857 [126/268] Linking static target lib/librte_mempool.a 00:02:52.857 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:52.857 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:52.857 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:52.857 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:52.857 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.857 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:52.857 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:52.857 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:53.123 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.123 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:53.123 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:53.123 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.123 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:53.123 [140/268] Linking static target lib/librte_net.a 00:02:53.123 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.384 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.384 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:53.384 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:53.384 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:53.384 [146/268] Linking static target lib/librte_cmdline.a 00:02:53.384 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.384 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.385 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:53.385 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.385 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:53.385 [152/268] Compiling C 
object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.385 [153/268] Linking static target lib/librte_timer.a 00:02:53.645 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:53.645 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.645 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.645 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:53.645 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.645 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.645 [160/268] Linking static target lib/librte_dmadev.a 00:02:53.645 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.645 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.645 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:53.645 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.645 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.645 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:53.903 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:53.903 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.903 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.904 [170/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.904 [171/268] Linking static target lib/librte_power.a 00:02:53.904 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.904 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.904 [174/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.904 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.904 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:53.904 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:53.904 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.904 [179/268] Linking static target lib/librte_hash.a 00:02:53.904 [180/268] Linking static target lib/librte_compressdev.a 00:02:53.904 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:53.904 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.904 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.163 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.163 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.163 [186/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.163 [187/268] Linking static target lib/librte_mbuf.a 00:02:54.163 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.163 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.163 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.163 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.163 [192/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.163 [193/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.163 [194/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.422 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.422 [196/268] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:02:54.422 [197/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.422 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.422 [199/268] Linking static target lib/librte_reorder.a 00:02:54.422 [200/268] Linking static target lib/librte_security.a 00:02:54.422 [201/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.422 [202/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.422 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.422 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.422 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.422 [206/268] Linking static target drivers/librte_bus_pci.a 00:02:54.422 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.422 [208/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.422 [209/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.422 [210/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.422 [211/268] Linking static target drivers/librte_bus_vdev.a 00:02:54.422 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.422 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.679 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.679 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.679 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.679 [217/268] Linking static target 
drivers/librte_mempool_ring.a 00:02:54.679 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.679 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.679 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.679 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.679 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:54.679 [223/268] Linking static target lib/librte_ethdev.a 00:02:54.939 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.939 [225/268] Linking static target lib/librte_cryptodev.a 00:02:54.939 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.947 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.320 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.221 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.221 [230/268] Linking target lib/librte_eal.so.24.1 00:02:59.221 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.221 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.221 [233/268] Linking target lib/librte_ring.so.24.1 00:02:59.221 [234/268] Linking target lib/librte_pci.so.24.1 00:02:59.221 [235/268] Linking target lib/librte_timer.so.24.1 00:02:59.221 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:59.221 [237/268] Linking target lib/librte_meter.so.24.1 00:02:59.221 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.480 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 
00:02:59.480 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.480 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.480 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.480 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:59.480 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:59.480 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:59.480 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.480 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.480 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.739 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.739 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:59.739 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.739 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:59.739 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:59.739 [254/268] Linking target lib/librte_net.so.24.1 00:02:59.739 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:59.997 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.997 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.997 [258/268] Linking target lib/librte_security.so.24.1 00:02:59.997 [259/268] Linking target lib/librte_hash.so.24.1 00:02:59.997 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:59.997 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:59.997 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:59.997 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:00.255 [264/268] 
Linking target lib/librte_power.so.24.1 00:03:03.538 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.538 [266/268] Linking static target lib/librte_vhost.a 00:03:04.104 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.104 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:04.104 INFO: autodetecting backend as ninja 00:03:04.104 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:26.030 CC lib/log/log.o 00:03:26.030 CC lib/log/log_flags.o 00:03:26.031 CC lib/log/log_deprecated.o 00:03:26.031 CC lib/ut_mock/mock.o 00:03:26.031 CC lib/ut/ut.o 00:03:26.031 LIB libspdk_ut.a 00:03:26.031 LIB libspdk_log.a 00:03:26.031 LIB libspdk_ut_mock.a 00:03:26.031 SO libspdk_ut.so.2.0 00:03:26.031 SO libspdk_log.so.7.1 00:03:26.031 SO libspdk_ut_mock.so.6.0 00:03:26.031 SYMLINK libspdk_ut.so 00:03:26.031 SYMLINK libspdk_ut_mock.so 00:03:26.031 SYMLINK libspdk_log.so 00:03:26.031 CC lib/ioat/ioat.o 00:03:26.031 CC lib/dma/dma.o 00:03:26.031 CXX lib/trace_parser/trace.o 00:03:26.031 CC lib/util/base64.o 00:03:26.031 CC lib/util/bit_array.o 00:03:26.031 CC lib/util/cpuset.o 00:03:26.031 CC lib/util/crc16.o 00:03:26.031 CC lib/util/crc32.o 00:03:26.031 CC lib/util/crc32c.o 00:03:26.031 CC lib/util/crc32_ieee.o 00:03:26.031 CC lib/util/crc64.o 00:03:26.031 CC lib/util/dif.o 00:03:26.031 CC lib/util/fd.o 00:03:26.031 CC lib/util/fd_group.o 00:03:26.031 CC lib/util/file.o 00:03:26.031 CC lib/util/hexlify.o 00:03:26.031 CC lib/util/iov.o 00:03:26.031 CC lib/util/math.o 00:03:26.031 CC lib/util/net.o 00:03:26.031 CC lib/util/pipe.o 00:03:26.031 CC lib/util/string.o 00:03:26.031 CC lib/util/strerror_tls.o 00:03:26.031 CC lib/util/uuid.o 00:03:26.031 CC lib/util/xor.o 00:03:26.031 CC lib/util/md5.o 00:03:26.031 CC lib/util/zipf.o 00:03:26.031 CC lib/vfio_user/host/vfio_user.o 00:03:26.031 CC 
lib/vfio_user/host/vfio_user_pci.o 00:03:26.031 LIB libspdk_ioat.a 00:03:26.031 LIB libspdk_dma.a 00:03:26.031 SO libspdk_dma.so.5.0 00:03:26.031 SO libspdk_ioat.so.7.0 00:03:26.031 SYMLINK libspdk_dma.so 00:03:26.031 SYMLINK libspdk_ioat.so 00:03:26.031 LIB libspdk_vfio_user.a 00:03:26.031 SO libspdk_vfio_user.so.5.0 00:03:26.031 SYMLINK libspdk_vfio_user.so 00:03:26.031 LIB libspdk_util.a 00:03:26.031 SO libspdk_util.so.10.0 00:03:26.031 SYMLINK libspdk_util.so 00:03:26.031 CC lib/idxd/idxd.o 00:03:26.031 CC lib/env_dpdk/env.o 00:03:26.031 CC lib/vmd/vmd.o 00:03:26.031 CC lib/conf/conf.o 00:03:26.031 CC lib/idxd/idxd_user.o 00:03:26.031 CC lib/rdma_utils/rdma_utils.o 00:03:26.031 CC lib/json/json_parse.o 00:03:26.031 CC lib/vmd/led.o 00:03:26.031 CC lib/env_dpdk/memory.o 00:03:26.031 CC lib/idxd/idxd_kernel.o 00:03:26.031 CC lib/json/json_util.o 00:03:26.031 CC lib/env_dpdk/pci.o 00:03:26.031 CC lib/json/json_write.o 00:03:26.031 CC lib/env_dpdk/init.o 00:03:26.031 CC lib/env_dpdk/threads.o 00:03:26.031 CC lib/env_dpdk/pci_ioat.o 00:03:26.031 CC lib/env_dpdk/pci_virtio.o 00:03:26.031 CC lib/env_dpdk/pci_vmd.o 00:03:26.031 CC lib/env_dpdk/pci_idxd.o 00:03:26.031 CC lib/env_dpdk/pci_event.o 00:03:26.031 CC lib/env_dpdk/sigbus_handler.o 00:03:26.031 CC lib/env_dpdk/pci_dpdk.o 00:03:26.031 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:26.031 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:26.031 LIB libspdk_trace_parser.a 00:03:26.031 SO libspdk_trace_parser.so.6.0 00:03:26.031 SYMLINK libspdk_trace_parser.so 00:03:26.031 LIB libspdk_conf.a 00:03:26.031 SO libspdk_conf.so.6.0 00:03:26.031 LIB libspdk_rdma_utils.a 00:03:26.031 LIB libspdk_json.a 00:03:26.031 SYMLINK libspdk_conf.so 00:03:26.031 SO libspdk_rdma_utils.so.1.0 00:03:26.031 SO libspdk_json.so.6.0 00:03:26.031 SYMLINK libspdk_rdma_utils.so 00:03:26.031 SYMLINK libspdk_json.so 00:03:26.031 CC lib/rdma_provider/common.o 00:03:26.031 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:26.031 CC lib/jsonrpc/jsonrpc_server.o 
00:03:26.031 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:26.031 CC lib/jsonrpc/jsonrpc_client.o 00:03:26.031 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:26.031 LIB libspdk_idxd.a 00:03:26.031 SO libspdk_idxd.so.12.1 00:03:26.031 LIB libspdk_vmd.a 00:03:26.031 SYMLINK libspdk_idxd.so 00:03:26.031 SO libspdk_vmd.so.6.0 00:03:26.031 SYMLINK libspdk_vmd.so 00:03:26.290 LIB libspdk_rdma_provider.a 00:03:26.290 SO libspdk_rdma_provider.so.7.0 00:03:26.290 LIB libspdk_jsonrpc.a 00:03:26.290 SYMLINK libspdk_rdma_provider.so 00:03:26.290 SO libspdk_jsonrpc.so.6.0 00:03:26.290 SYMLINK libspdk_jsonrpc.so 00:03:26.548 CC lib/rpc/rpc.o 00:03:26.807 LIB libspdk_rpc.a 00:03:26.807 SO libspdk_rpc.so.6.0 00:03:26.807 SYMLINK libspdk_rpc.so 00:03:27.065 CC lib/notify/notify.o 00:03:27.065 CC lib/notify/notify_rpc.o 00:03:27.065 CC lib/trace/trace.o 00:03:27.065 CC lib/trace/trace_flags.o 00:03:27.065 CC lib/trace/trace_rpc.o 00:03:27.065 CC lib/keyring/keyring.o 00:03:27.065 CC lib/keyring/keyring_rpc.o 00:03:27.066 LIB libspdk_notify.a 00:03:27.066 SO libspdk_notify.so.6.0 00:03:27.066 SYMLINK libspdk_notify.so 00:03:27.066 LIB libspdk_keyring.a 00:03:27.324 LIB libspdk_trace.a 00:03:27.324 SO libspdk_keyring.so.2.0 00:03:27.324 SO libspdk_trace.so.11.0 00:03:27.324 SYMLINK libspdk_keyring.so 00:03:27.324 SYMLINK libspdk_trace.so 00:03:27.582 CC lib/thread/thread.o 00:03:27.582 CC lib/thread/iobuf.o 00:03:27.582 CC lib/sock/sock.o 00:03:27.582 CC lib/sock/sock_rpc.o 00:03:27.582 LIB libspdk_env_dpdk.a 00:03:27.582 SO libspdk_env_dpdk.so.15.0 00:03:27.582 SYMLINK libspdk_env_dpdk.so 00:03:27.840 LIB libspdk_sock.a 00:03:27.840 SO libspdk_sock.so.10.0 00:03:27.840 SYMLINK libspdk_sock.so 00:03:28.099 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.099 CC lib/nvme/nvme_ctrlr.o 00:03:28.099 CC lib/nvme/nvme_fabric.o 00:03:28.099 CC lib/nvme/nvme_ns_cmd.o 00:03:28.099 CC lib/nvme/nvme_ns.o 00:03:28.099 CC lib/nvme/nvme_pcie_common.o 00:03:28.099 CC lib/nvme/nvme_pcie.o 00:03:28.099 CC 
lib/nvme/nvme_qpair.o 00:03:28.099 CC lib/nvme/nvme.o 00:03:28.099 CC lib/nvme/nvme_quirks.o 00:03:28.099 CC lib/nvme/nvme_transport.o 00:03:28.099 CC lib/nvme/nvme_discovery.o 00:03:28.099 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.099 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.099 CC lib/nvme/nvme_tcp.o 00:03:28.099 CC lib/nvme/nvme_opal.o 00:03:28.099 CC lib/nvme/nvme_io_msg.o 00:03:28.099 CC lib/nvme/nvme_poll_group.o 00:03:28.099 CC lib/nvme/nvme_zns.o 00:03:28.099 CC lib/nvme/nvme_auth.o 00:03:28.099 CC lib/nvme/nvme_stubs.o 00:03:28.099 CC lib/nvme/nvme_cuse.o 00:03:28.099 CC lib/nvme/nvme_vfio_user.o 00:03:28.099 CC lib/nvme/nvme_rdma.o 00:03:29.036 LIB libspdk_thread.a 00:03:29.036 SO libspdk_thread.so.10.2 00:03:29.036 SYMLINK libspdk_thread.so 00:03:29.295 CC lib/accel/accel.o 00:03:29.295 CC lib/blob/blobstore.o 00:03:29.295 CC lib/init/json_config.o 00:03:29.295 CC lib/accel/accel_rpc.o 00:03:29.295 CC lib/vfu_tgt/tgt_endpoint.o 00:03:29.295 CC lib/blob/request.o 00:03:29.295 CC lib/init/subsystem.o 00:03:29.295 CC lib/accel/accel_sw.o 00:03:29.295 CC lib/fsdev/fsdev.o 00:03:29.295 CC lib/vfu_tgt/tgt_rpc.o 00:03:29.295 CC lib/virtio/virtio.o 00:03:29.295 CC lib/blob/zeroes.o 00:03:29.295 CC lib/init/subsystem_rpc.o 00:03:29.295 CC lib/fsdev/fsdev_io.o 00:03:29.295 CC lib/virtio/virtio_vhost_user.o 00:03:29.295 CC lib/blob/blob_bs_dev.o 00:03:29.295 CC lib/init/rpc.o 00:03:29.295 CC lib/fsdev/fsdev_rpc.o 00:03:29.295 CC lib/virtio/virtio_vfio_user.o 00:03:29.295 CC lib/virtio/virtio_pci.o 00:03:29.553 LIB libspdk_init.a 00:03:29.553 SO libspdk_init.so.6.0 00:03:29.553 LIB libspdk_virtio.a 00:03:29.553 SYMLINK libspdk_init.so 00:03:29.811 LIB libspdk_vfu_tgt.a 00:03:29.811 SO libspdk_virtio.so.7.0 00:03:29.811 SO libspdk_vfu_tgt.so.3.0 00:03:29.811 SYMLINK libspdk_vfu_tgt.so 00:03:29.811 SYMLINK libspdk_virtio.so 00:03:29.811 CC lib/event/app.o 00:03:29.811 CC lib/event/reactor.o 00:03:29.811 CC lib/event/log_rpc.o 00:03:29.811 CC lib/event/app_rpc.o 
00:03:29.811 CC lib/event/scheduler_static.o 00:03:30.069 LIB libspdk_fsdev.a 00:03:30.069 SO libspdk_fsdev.so.1.0 00:03:30.069 SYMLINK libspdk_fsdev.so 00:03:30.327 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:30.327 LIB libspdk_event.a 00:03:30.327 SO libspdk_event.so.14.0 00:03:30.327 SYMLINK libspdk_event.so 00:03:30.585 LIB libspdk_accel.a 00:03:30.585 SO libspdk_accel.so.16.0 00:03:30.585 LIB libspdk_nvme.a 00:03:30.585 SYMLINK libspdk_accel.so 00:03:30.585 SO libspdk_nvme.so.14.0 00:03:30.843 CC lib/bdev/bdev.o 00:03:30.843 CC lib/bdev/bdev_rpc.o 00:03:30.843 CC lib/bdev/bdev_zone.o 00:03:30.843 CC lib/bdev/part.o 00:03:30.843 CC lib/bdev/scsi_nvme.o 00:03:30.843 SYMLINK libspdk_nvme.so 00:03:30.843 LIB libspdk_fuse_dispatcher.a 00:03:30.843 SO libspdk_fuse_dispatcher.so.1.0 00:03:31.100 SYMLINK libspdk_fuse_dispatcher.so 00:03:32.497 LIB libspdk_blob.a 00:03:32.497 SO libspdk_blob.so.11.0 00:03:32.497 SYMLINK libspdk_blob.so 00:03:32.754 CC lib/lvol/lvol.o 00:03:32.754 CC lib/blobfs/blobfs.o 00:03:32.754 CC lib/blobfs/tree.o 00:03:33.695 LIB libspdk_bdev.a 00:03:33.695 SO libspdk_bdev.so.17.0 00:03:33.695 LIB libspdk_blobfs.a 00:03:33.695 SO libspdk_blobfs.so.10.0 00:03:33.695 SYMLINK libspdk_bdev.so 00:03:33.695 SYMLINK libspdk_blobfs.so 00:03:33.695 LIB libspdk_lvol.a 00:03:33.695 SO libspdk_lvol.so.10.0 00:03:33.695 SYMLINK libspdk_lvol.so 00:03:33.695 CC lib/nbd/nbd.o 00:03:33.695 CC lib/nbd/nbd_rpc.o 00:03:33.695 CC lib/ftl/ftl_core.o 00:03:33.695 CC lib/ftl/ftl_init.o 00:03:33.695 CC lib/ublk/ublk.o 00:03:33.695 CC lib/ftl/ftl_layout.o 00:03:33.695 CC lib/ublk/ublk_rpc.o 00:03:33.695 CC lib/ftl/ftl_debug.o 00:03:33.695 CC lib/ftl/ftl_io.o 00:03:33.695 CC lib/scsi/dev.o 00:03:33.695 CC lib/ftl/ftl_sb.o 00:03:33.695 CC lib/nvmf/ctrlr.o 00:03:33.695 CC lib/ftl/ftl_l2p.o 00:03:33.695 CC lib/scsi/lun.o 00:03:33.695 CC lib/nvmf/ctrlr_discovery.o 00:03:33.695 CC lib/ftl/ftl_l2p_flat.o 00:03:33.695 CC lib/scsi/port.o 00:03:33.695 CC lib/nvmf/ctrlr_bdev.o 
00:03:33.695 CC lib/ftl/ftl_nv_cache.o 00:03:33.695 CC lib/scsi/scsi.o 00:03:33.695 CC lib/nvmf/subsystem.o 00:03:33.695 CC lib/ftl/ftl_band.o 00:03:33.695 CC lib/scsi/scsi_bdev.o 00:03:33.695 CC lib/nvmf/nvmf_rpc.o 00:03:33.695 CC lib/nvmf/nvmf.o 00:03:33.695 CC lib/scsi/scsi_pr.o 00:03:33.695 CC lib/nvmf/transport.o 00:03:33.695 CC lib/scsi/scsi_rpc.o 00:03:33.695 CC lib/ftl/ftl_band_ops.o 00:03:33.695 CC lib/ftl/ftl_writer.o 00:03:33.695 CC lib/nvmf/tcp.o 00:03:33.695 CC lib/scsi/task.o 00:03:33.695 CC lib/ftl/ftl_rq.o 00:03:33.695 CC lib/nvmf/stubs.o 00:03:33.695 CC lib/ftl/ftl_reloc.o 00:03:33.695 CC lib/nvmf/mdns_server.o 00:03:33.695 CC lib/ftl/ftl_l2p_cache.o 00:03:33.695 CC lib/ftl/ftl_p2l.o 00:03:33.695 CC lib/nvmf/vfio_user.o 00:03:33.695 CC lib/nvmf/rdma.o 00:03:33.695 CC lib/ftl/ftl_p2l_log.o 00:03:33.695 CC lib/nvmf/auth.o 00:03:33.695 CC lib/ftl/mngt/ftl_mngt.o 00:03:33.695 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.695 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.695 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.695 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.695 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:34.272 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:34.272 CC lib/ftl/utils/ftl_conf.o 00:03:34.272 CC lib/ftl/utils/ftl_md.o 00:03:34.272 CC lib/ftl/utils/ftl_mempool.o 00:03:34.272 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.272 CC lib/ftl/utils/ftl_property.o 00:03:34.272 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.272 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.272 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:34.272 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:34.272 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.272 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.532 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.532 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.532 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:34.532 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:34.532 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:34.532 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:34.532 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:34.532 CC lib/ftl/base/ftl_base_dev.o 00:03:34.532 CC lib/ftl/base/ftl_base_bdev.o 00:03:34.532 CC lib/ftl/ftl_trace.o 00:03:34.532 LIB libspdk_nbd.a 00:03:34.790 SO libspdk_nbd.so.7.0 00:03:34.790 SYMLINK libspdk_nbd.so 00:03:34.790 LIB libspdk_scsi.a 00:03:34.790 SO libspdk_scsi.so.9.0 00:03:34.790 SYMLINK libspdk_scsi.so 00:03:35.048 LIB libspdk_ublk.a 00:03:35.048 SO libspdk_ublk.so.3.0 00:03:35.048 SYMLINK libspdk_ublk.so 00:03:35.048 CC lib/vhost/vhost.o 00:03:35.048 CC lib/iscsi/conn.o 00:03:35.048 CC lib/iscsi/init_grp.o 00:03:35.048 CC lib/vhost/vhost_rpc.o 00:03:35.048 CC lib/vhost/vhost_scsi.o 00:03:35.048 CC lib/iscsi/iscsi.o 00:03:35.048 CC lib/iscsi/param.o 00:03:35.048 CC lib/vhost/vhost_blk.o 00:03:35.048 CC lib/vhost/rte_vhost_user.o 00:03:35.048 CC lib/iscsi/portal_grp.o 00:03:35.048 CC lib/iscsi/tgt_node.o 00:03:35.048 CC lib/iscsi/iscsi_subsystem.o 00:03:35.048 CC lib/iscsi/iscsi_rpc.o 00:03:35.048 CC lib/iscsi/task.o 00:03:35.306 LIB libspdk_ftl.a 00:03:35.564 SO libspdk_ftl.so.9.0 00:03:35.822 SYMLINK libspdk_ftl.so 00:03:36.389 LIB libspdk_vhost.a 00:03:36.389 SO libspdk_vhost.so.8.0 00:03:36.389 SYMLINK libspdk_vhost.so 00:03:36.389 LIB libspdk_nvmf.a 00:03:36.649 LIB libspdk_iscsi.a 00:03:36.649 SO libspdk_iscsi.so.8.0 00:03:36.649 SO libspdk_nvmf.so.19.1 00:03:36.649 SYMLINK libspdk_iscsi.so 00:03:36.649 SYMLINK libspdk_nvmf.so 00:03:37.215 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.215 CC module/vfu_device/vfu_virtio.o 00:03:37.215 CC module/vfu_device/vfu_virtio_blk.o 00:03:37.215 CC module/vfu_device/vfu_virtio_scsi.o 00:03:37.215 CC module/vfu_device/vfu_virtio_rpc.o 00:03:37.215 CC module/vfu_device/vfu_virtio_fs.o 
00:03:37.215 CC module/sock/posix/posix.o 00:03:37.216 CC module/accel/dsa/accel_dsa.o 00:03:37.216 CC module/accel/dsa/accel_dsa_rpc.o 00:03:37.216 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:37.216 CC module/accel/ioat/accel_ioat.o 00:03:37.216 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.216 CC module/accel/iaa/accel_iaa.o 00:03:37.216 CC module/keyring/linux/keyring.o 00:03:37.216 CC module/fsdev/aio/fsdev_aio.o 00:03:37.216 CC module/accel/iaa/accel_iaa_rpc.o 00:03:37.216 CC module/blob/bdev/blob_bdev.o 00:03:37.216 CC module/keyring/linux/keyring_rpc.o 00:03:37.216 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:37.216 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.216 CC module/fsdev/aio/linux_aio_mgr.o 00:03:37.216 CC module/scheduler/gscheduler/gscheduler.o 00:03:37.216 CC module/keyring/file/keyring.o 00:03:37.216 CC module/keyring/file/keyring_rpc.o 00:03:37.216 CC module/accel/error/accel_error_rpc.o 00:03:37.216 CC module/accel/error/accel_error.o 00:03:37.216 LIB libspdk_env_dpdk_rpc.a 00:03:37.216 SO libspdk_env_dpdk_rpc.so.6.0 00:03:37.216 SYMLINK libspdk_env_dpdk_rpc.so 00:03:37.216 LIB libspdk_keyring_linux.a 00:03:37.216 LIB libspdk_scheduler_gscheduler.a 00:03:37.216 LIB libspdk_scheduler_dpdk_governor.a 00:03:37.474 SO libspdk_keyring_linux.so.1.0 00:03:37.474 SO libspdk_scheduler_gscheduler.so.4.0 00:03:37.474 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:37.474 LIB libspdk_accel_error.a 00:03:37.474 LIB libspdk_accel_ioat.a 00:03:37.474 LIB libspdk_scheduler_dynamic.a 00:03:37.474 SO libspdk_accel_error.so.2.0 00:03:37.474 LIB libspdk_keyring_file.a 00:03:37.474 LIB libspdk_accel_iaa.a 00:03:37.474 SYMLINK libspdk_scheduler_gscheduler.so 00:03:37.474 SYMLINK libspdk_keyring_linux.so 00:03:37.474 SO libspdk_accel_ioat.so.6.0 00:03:37.474 SO libspdk_scheduler_dynamic.so.4.0 00:03:37.474 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:37.474 SO libspdk_keyring_file.so.2.0 00:03:37.474 SO libspdk_accel_iaa.so.3.0 00:03:37.474 
SYMLINK libspdk_accel_error.so 00:03:37.474 SYMLINK libspdk_accel_ioat.so 00:03:37.474 SYMLINK libspdk_scheduler_dynamic.so 00:03:37.474 SYMLINK libspdk_keyring_file.so 00:03:37.474 LIB libspdk_accel_dsa.a 00:03:37.474 SYMLINK libspdk_accel_iaa.so 00:03:37.474 SO libspdk_accel_dsa.so.5.0 00:03:37.474 LIB libspdk_blob_bdev.a 00:03:37.474 SYMLINK libspdk_accel_dsa.so 00:03:37.474 SO libspdk_blob_bdev.so.11.0 00:03:37.733 SYMLINK libspdk_blob_bdev.so 00:03:37.733 LIB libspdk_vfu_device.a 00:03:37.733 SO libspdk_vfu_device.so.3.0 00:03:37.733 SYMLINK libspdk_vfu_device.so 00:03:37.993 LIB libspdk_fsdev_aio.a 00:03:37.993 CC module/bdev/malloc/bdev_malloc.o 00:03:37.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:37.993 CC module/bdev/lvol/vbdev_lvol.o 00:03:37.993 CC module/bdev/delay/vbdev_delay.o 00:03:37.993 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.993 CC module/bdev/nvme/bdev_nvme.o 00:03:37.993 CC module/bdev/gpt/gpt.o 00:03:37.993 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:37.993 CC module/bdev/null/bdev_null.o 00:03:37.993 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:37.993 CC module/bdev/error/vbdev_error.o 00:03:37.993 CC module/bdev/null/bdev_null_rpc.o 00:03:37.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:37.993 CC module/bdev/gpt/vbdev_gpt.o 00:03:37.993 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:37.993 CC module/bdev/nvme/nvme_rpc.o 00:03:37.993 CC module/blobfs/bdev/blobfs_bdev.o 00:03:37.993 CC module/bdev/error/vbdev_error_rpc.o 00:03:37.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:37.993 CC module/bdev/nvme/bdev_mdns_client.o 00:03:37.993 CC module/bdev/raid/bdev_raid.o 00:03:37.993 CC module/bdev/split/vbdev_split.o 00:03:37.993 CC module/bdev/nvme/vbdev_opal.o 00:03:37.993 CC module/bdev/split/vbdev_split_rpc.o 00:03:37.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:37.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:37.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.993 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:03:37.993 CC module/bdev/iscsi/bdev_iscsi.o 00:03:37.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:37.993 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.993 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:37.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:37.993 CC module/bdev/raid/raid0.o 00:03:37.993 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.993 CC module/bdev/aio/bdev_aio.o 00:03:37.993 CC module/bdev/aio/bdev_aio_rpc.o 00:03:37.993 CC module/bdev/raid/raid1.o 00:03:37.993 CC module/bdev/raid/concat.o 00:03:37.993 CC module/bdev/ftl/bdev_ftl.o 00:03:37.993 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:37.993 SO libspdk_fsdev_aio.so.1.0 00:03:37.993 SYMLINK libspdk_fsdev_aio.so 00:03:38.251 LIB libspdk_bdev_error.a 00:03:38.251 LIB libspdk_sock_posix.a 00:03:38.251 SO libspdk_bdev_error.so.6.0 00:03:38.251 SO libspdk_sock_posix.so.6.0 00:03:38.251 LIB libspdk_blobfs_bdev.a 00:03:38.251 SO libspdk_blobfs_bdev.so.6.0 00:03:38.251 SYMLINK libspdk_bdev_error.so 00:03:38.251 SYMLINK libspdk_sock_posix.so 00:03:38.251 LIB libspdk_bdev_split.a 00:03:38.251 LIB libspdk_bdev_null.a 00:03:38.251 SYMLINK libspdk_blobfs_bdev.so 00:03:38.251 SO libspdk_bdev_split.so.6.0 00:03:38.251 SO libspdk_bdev_null.so.6.0 00:03:38.510 LIB libspdk_bdev_passthru.a 00:03:38.510 LIB libspdk_bdev_gpt.a 00:03:38.510 LIB libspdk_bdev_ftl.a 00:03:38.510 SO libspdk_bdev_passthru.so.6.0 00:03:38.510 SO libspdk_bdev_gpt.so.6.0 00:03:38.510 SYMLINK libspdk_bdev_split.so 00:03:38.510 SYMLINK libspdk_bdev_null.so 00:03:38.510 SO libspdk_bdev_ftl.so.6.0 00:03:38.510 LIB libspdk_bdev_iscsi.a 00:03:38.510 LIB libspdk_bdev_zone_block.a 00:03:38.510 LIB libspdk_bdev_aio.a 00:03:38.510 SO libspdk_bdev_iscsi.so.6.0 00:03:38.510 SO libspdk_bdev_zone_block.so.6.0 00:03:38.510 SYMLINK libspdk_bdev_passthru.so 00:03:38.510 SYMLINK libspdk_bdev_gpt.so 00:03:38.510 LIB libspdk_bdev_malloc.a 00:03:38.510 SYMLINK libspdk_bdev_ftl.so 00:03:38.510 SO 
libspdk_bdev_aio.so.6.0 00:03:38.510 SO libspdk_bdev_malloc.so.6.0 00:03:38.510 SYMLINK libspdk_bdev_iscsi.so 00:03:38.510 SYMLINK libspdk_bdev_zone_block.so 00:03:38.510 LIB libspdk_bdev_delay.a 00:03:38.510 SYMLINK libspdk_bdev_aio.so 00:03:38.510 SYMLINK libspdk_bdev_malloc.so 00:03:38.510 SO libspdk_bdev_delay.so.6.0 00:03:38.510 LIB libspdk_bdev_lvol.a 00:03:38.510 SYMLINK libspdk_bdev_delay.so 00:03:38.510 SO libspdk_bdev_lvol.so.6.0 00:03:38.768 SYMLINK libspdk_bdev_lvol.so 00:03:38.768 LIB libspdk_bdev_virtio.a 00:03:38.768 SO libspdk_bdev_virtio.so.6.0 00:03:38.768 SYMLINK libspdk_bdev_virtio.so 00:03:39.068 LIB libspdk_bdev_raid.a 00:03:39.350 SO libspdk_bdev_raid.so.6.0 00:03:39.350 SYMLINK libspdk_bdev_raid.so 00:03:40.288 LIB libspdk_bdev_nvme.a 00:03:40.288 SO libspdk_bdev_nvme.so.7.0 00:03:40.545 SYMLINK libspdk_bdev_nvme.so 00:03:40.804 CC module/event/subsystems/iobuf/iobuf.o 00:03:40.804 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:40.804 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:40.804 CC module/event/subsystems/fsdev/fsdev.o 00:03:40.804 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:40.804 CC module/event/subsystems/keyring/keyring.o 00:03:40.804 CC module/event/subsystems/scheduler/scheduler.o 00:03:40.804 CC module/event/subsystems/vmd/vmd.o 00:03:40.804 CC module/event/subsystems/sock/sock.o 00:03:40.804 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.062 LIB libspdk_event_keyring.a 00:03:41.062 LIB libspdk_event_fsdev.a 00:03:41.062 LIB libspdk_event_vfu_tgt.a 00:03:41.062 LIB libspdk_event_vmd.a 00:03:41.062 LIB libspdk_event_scheduler.a 00:03:41.062 LIB libspdk_event_sock.a 00:03:41.062 LIB libspdk_event_vhost_blk.a 00:03:41.062 SO libspdk_event_keyring.so.1.0 00:03:41.062 LIB libspdk_event_iobuf.a 00:03:41.062 SO libspdk_event_fsdev.so.1.0 00:03:41.062 SO libspdk_event_scheduler.so.4.0 00:03:41.062 SO libspdk_event_vfu_tgt.so.3.0 00:03:41.062 SO libspdk_event_sock.so.5.0 00:03:41.062 SO libspdk_event_vmd.so.6.0 
00:03:41.062 SO libspdk_event_vhost_blk.so.3.0 00:03:41.062 SO libspdk_event_iobuf.so.3.0 00:03:41.062 SYMLINK libspdk_event_keyring.so 00:03:41.062 SYMLINK libspdk_event_fsdev.so 00:03:41.062 SYMLINK libspdk_event_vfu_tgt.so 00:03:41.062 SYMLINK libspdk_event_scheduler.so 00:03:41.062 SYMLINK libspdk_event_sock.so 00:03:41.062 SYMLINK libspdk_event_vhost_blk.so 00:03:41.062 SYMLINK libspdk_event_vmd.so 00:03:41.062 SYMLINK libspdk_event_iobuf.so 00:03:41.321 CC module/event/subsystems/accel/accel.o 00:03:41.321 LIB libspdk_event_accel.a 00:03:41.321 SO libspdk_event_accel.so.6.0 00:03:41.321 SYMLINK libspdk_event_accel.so 00:03:41.578 CC module/event/subsystems/bdev/bdev.o 00:03:41.836 LIB libspdk_event_bdev.a 00:03:41.836 SO libspdk_event_bdev.so.6.0 00:03:41.836 SYMLINK libspdk_event_bdev.so 00:03:42.094 CC module/event/subsystems/nbd/nbd.o 00:03:42.094 CC module/event/subsystems/ublk/ublk.o 00:03:42.094 CC module/event/subsystems/scsi/scsi.o 00:03:42.094 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.094 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.094 LIB libspdk_event_ublk.a 00:03:42.094 LIB libspdk_event_nbd.a 00:03:42.094 LIB libspdk_event_scsi.a 00:03:42.094 SO libspdk_event_ublk.so.3.0 00:03:42.094 SO libspdk_event_nbd.so.6.0 00:03:42.094 SO libspdk_event_scsi.so.6.0 00:03:42.350 SYMLINK libspdk_event_nbd.so 00:03:42.350 SYMLINK libspdk_event_ublk.so 00:03:42.350 SYMLINK libspdk_event_scsi.so 00:03:42.350 LIB libspdk_event_nvmf.a 00:03:42.350 SO libspdk_event_nvmf.so.6.0 00:03:42.350 SYMLINK libspdk_event_nvmf.so 00:03:42.350 CC module/event/subsystems/iscsi/iscsi.o 00:03:42.350 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.608 LIB libspdk_event_vhost_scsi.a 00:03:42.608 SO libspdk_event_vhost_scsi.so.3.0 00:03:42.609 LIB libspdk_event_iscsi.a 00:03:42.609 SO libspdk_event_iscsi.so.6.0 00:03:42.609 SYMLINK libspdk_event_vhost_scsi.so 00:03:42.609 SYMLINK libspdk_event_iscsi.so 00:03:42.866 SO libspdk.so.6.0 00:03:42.866 SYMLINK 
libspdk.so 00:03:42.866 CC app/trace_record/trace_record.o 00:03:42.866 CXX app/trace/trace.o 00:03:42.866 CC app/spdk_nvme_discover/discovery_aer.o 00:03:42.866 CC app/spdk_nvme_perf/perf.o 00:03:42.866 CC test/rpc_client/rpc_client_test.o 00:03:42.866 TEST_HEADER include/spdk/accel.h 00:03:42.866 CC app/spdk_lspci/spdk_lspci.o 00:03:42.866 TEST_HEADER include/spdk/accel_module.h 00:03:42.866 TEST_HEADER include/spdk/assert.h 00:03:42.866 TEST_HEADER include/spdk/barrier.h 00:03:42.866 CC app/spdk_nvme_identify/identify.o 00:03:42.866 TEST_HEADER include/spdk/base64.h 00:03:42.866 CC app/spdk_top/spdk_top.o 00:03:42.866 TEST_HEADER include/spdk/bdev.h 00:03:42.866 TEST_HEADER include/spdk/bdev_module.h 00:03:42.866 TEST_HEADER include/spdk/bdev_zone.h 00:03:42.866 TEST_HEADER include/spdk/bit_array.h 00:03:42.866 TEST_HEADER include/spdk/bit_pool.h 00:03:42.866 TEST_HEADER include/spdk/blob_bdev.h 00:03:42.866 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:42.866 TEST_HEADER include/spdk/blobfs.h 00:03:42.866 TEST_HEADER include/spdk/blob.h 00:03:42.866 TEST_HEADER include/spdk/conf.h 00:03:42.866 TEST_HEADER include/spdk/config.h 00:03:42.866 TEST_HEADER include/spdk/cpuset.h 00:03:42.866 TEST_HEADER include/spdk/crc16.h 00:03:42.866 TEST_HEADER include/spdk/crc32.h 00:03:42.866 TEST_HEADER include/spdk/crc64.h 00:03:43.131 TEST_HEADER include/spdk/dif.h 00:03:43.131 TEST_HEADER include/spdk/dma.h 00:03:43.131 TEST_HEADER include/spdk/endian.h 00:03:43.131 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.131 TEST_HEADER include/spdk/env.h 00:03:43.131 TEST_HEADER include/spdk/event.h 00:03:43.131 TEST_HEADER include/spdk/fd_group.h 00:03:43.131 TEST_HEADER include/spdk/fd.h 00:03:43.131 TEST_HEADER include/spdk/file.h 00:03:43.131 TEST_HEADER include/spdk/fsdev.h 00:03:43.131 TEST_HEADER include/spdk/ftl.h 00:03:43.131 TEST_HEADER include/spdk/fsdev_module.h 00:03:43.131 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:43.131 TEST_HEADER include/spdk/gpt_spec.h 
00:03:43.131 TEST_HEADER include/spdk/hexlify.h 00:03:43.131 TEST_HEADER include/spdk/histogram_data.h 00:03:43.131 TEST_HEADER include/spdk/idxd.h 00:03:43.131 TEST_HEADER include/spdk/idxd_spec.h 00:03:43.131 TEST_HEADER include/spdk/init.h 00:03:43.131 TEST_HEADER include/spdk/ioat.h 00:03:43.131 TEST_HEADER include/spdk/ioat_spec.h 00:03:43.131 TEST_HEADER include/spdk/json.h 00:03:43.131 TEST_HEADER include/spdk/iscsi_spec.h 00:03:43.131 TEST_HEADER include/spdk/jsonrpc.h 00:03:43.131 TEST_HEADER include/spdk/keyring.h 00:03:43.131 TEST_HEADER include/spdk/keyring_module.h 00:03:43.131 TEST_HEADER include/spdk/likely.h 00:03:43.131 TEST_HEADER include/spdk/log.h 00:03:43.131 TEST_HEADER include/spdk/lvol.h 00:03:43.131 TEST_HEADER include/spdk/md5.h 00:03:43.131 TEST_HEADER include/spdk/mmio.h 00:03:43.131 TEST_HEADER include/spdk/memory.h 00:03:43.131 TEST_HEADER include/spdk/nbd.h 00:03:43.131 TEST_HEADER include/spdk/net.h 00:03:43.131 TEST_HEADER include/spdk/nvme.h 00:03:43.131 TEST_HEADER include/spdk/notify.h 00:03:43.131 TEST_HEADER include/spdk/nvme_intel.h 00:03:43.131 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:43.131 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:43.131 TEST_HEADER include/spdk/nvme_spec.h 00:03:43.131 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:43.131 TEST_HEADER include/spdk/nvme_zns.h 00:03:43.131 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:43.131 TEST_HEADER include/spdk/nvmf.h 00:03:43.131 TEST_HEADER include/spdk/nvmf_transport.h 00:03:43.131 TEST_HEADER include/spdk/nvmf_spec.h 00:03:43.131 TEST_HEADER include/spdk/opal.h 00:03:43.131 TEST_HEADER include/spdk/opal_spec.h 00:03:43.131 TEST_HEADER include/spdk/pci_ids.h 00:03:43.131 TEST_HEADER include/spdk/pipe.h 00:03:43.131 TEST_HEADER include/spdk/queue.h 00:03:43.131 TEST_HEADER include/spdk/reduce.h 00:03:43.131 TEST_HEADER include/spdk/rpc.h 00:03:43.131 TEST_HEADER include/spdk/scheduler.h 00:03:43.131 TEST_HEADER include/spdk/scsi.h 00:03:43.131 TEST_HEADER 
include/spdk/scsi_spec.h 00:03:43.131 TEST_HEADER include/spdk/sock.h 00:03:43.131 TEST_HEADER include/spdk/stdinc.h 00:03:43.132 TEST_HEADER include/spdk/thread.h 00:03:43.132 TEST_HEADER include/spdk/string.h 00:03:43.132 TEST_HEADER include/spdk/trace.h 00:03:43.132 TEST_HEADER include/spdk/trace_parser.h 00:03:43.132 TEST_HEADER include/spdk/tree.h 00:03:43.132 TEST_HEADER include/spdk/ublk.h 00:03:43.132 TEST_HEADER include/spdk/util.h 00:03:43.132 TEST_HEADER include/spdk/uuid.h 00:03:43.132 TEST_HEADER include/spdk/version.h 00:03:43.132 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:43.132 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:43.132 TEST_HEADER include/spdk/vhost.h 00:03:43.132 TEST_HEADER include/spdk/vmd.h 00:03:43.132 TEST_HEADER include/spdk/zipf.h 00:03:43.132 TEST_HEADER include/spdk/xor.h 00:03:43.132 CXX test/cpp_headers/accel.o 00:03:43.132 CXX test/cpp_headers/accel_module.o 00:03:43.132 CXX test/cpp_headers/assert.o 00:03:43.132 CXX test/cpp_headers/barrier.o 00:03:43.132 CXX test/cpp_headers/base64.o 00:03:43.132 CXX test/cpp_headers/bdev.o 00:03:43.132 CXX test/cpp_headers/bdev_module.o 00:03:43.132 CXX test/cpp_headers/bdev_zone.o 00:03:43.132 CXX test/cpp_headers/bit_array.o 00:03:43.132 CC app/spdk_dd/spdk_dd.o 00:03:43.132 CXX test/cpp_headers/bit_pool.o 00:03:43.132 CXX test/cpp_headers/blob_bdev.o 00:03:43.132 CXX test/cpp_headers/blobfs_bdev.o 00:03:43.132 CXX test/cpp_headers/blobfs.o 00:03:43.132 CXX test/cpp_headers/blob.o 00:03:43.132 CXX test/cpp_headers/conf.o 00:03:43.132 CXX test/cpp_headers/config.o 00:03:43.132 CXX test/cpp_headers/cpuset.o 00:03:43.132 CXX test/cpp_headers/crc16.o 00:03:43.132 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:43.132 CC app/nvmf_tgt/nvmf_main.o 00:03:43.132 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.132 CXX test/cpp_headers/crc32.o 00:03:43.132 CC test/thread/poller_perf/poller_perf.o 00:03:43.132 CC examples/ioat/verify/verify.o 00:03:43.132 CC test/env/vtophys/vtophys.o 00:03:43.132 CC 
examples/ioat/perf/perf.o 00:03:43.132 CC test/env/memory/memory_ut.o 00:03:43.132 CC app/spdk_tgt/spdk_tgt.o 00:03:43.132 CC app/fio/nvme/fio_plugin.o 00:03:43.132 CC examples/util/zipf/zipf.o 00:03:43.132 CC test/env/pci/pci_ut.o 00:03:43.132 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:43.132 CC test/app/histogram_perf/histogram_perf.o 00:03:43.132 CC test/app/jsoncat/jsoncat.o 00:03:43.132 CC test/app/stub/stub.o 00:03:43.132 CC test/dma/test_dma/test_dma.o 00:03:43.132 CC app/fio/bdev/fio_plugin.o 00:03:43.132 CC test/app/bdev_svc/bdev_svc.o 00:03:43.394 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.394 LINK spdk_lspci 00:03:43.394 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:43.394 LINK rpc_client_test 00:03:43.394 LINK spdk_nvme_discover 00:03:43.394 CXX test/cpp_headers/crc64.o 00:03:43.394 LINK poller_perf 00:03:43.394 LINK nvmf_tgt 00:03:43.394 LINK histogram_perf 00:03:43.394 LINK zipf 00:03:43.394 LINK jsoncat 00:03:43.394 LINK vtophys 00:03:43.394 CXX test/cpp_headers/dif.o 00:03:43.394 CXX test/cpp_headers/dma.o 00:03:43.659 LINK env_dpdk_post_init 00:03:43.659 LINK spdk_trace_record 00:03:43.659 LINK interrupt_tgt 00:03:43.659 CXX test/cpp_headers/endian.o 00:03:43.659 CXX test/cpp_headers/env_dpdk.o 00:03:43.659 CXX test/cpp_headers/env.o 00:03:43.659 CXX test/cpp_headers/event.o 00:03:43.659 CXX test/cpp_headers/fd_group.o 00:03:43.659 CXX test/cpp_headers/fd.o 00:03:43.659 CXX test/cpp_headers/file.o 00:03:43.659 CXX test/cpp_headers/fsdev.o 00:03:43.659 LINK stub 00:03:43.659 LINK iscsi_tgt 00:03:43.659 CXX test/cpp_headers/fsdev_module.o 00:03:43.659 CXX test/cpp_headers/ftl.o 00:03:43.659 LINK ioat_perf 00:03:43.659 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.659 CXX test/cpp_headers/gpt_spec.o 00:03:43.659 CXX test/cpp_headers/hexlify.o 00:03:43.659 LINK verify 00:03:43.659 LINK spdk_tgt 00:03:43.659 LINK bdev_svc 00:03:43.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:43.659 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.659 CXX test/cpp_headers/histogram_data.o 00:03:43.660 CXX test/cpp_headers/idxd.o 00:03:43.660 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.921 CXX test/cpp_headers/idxd_spec.o 00:03:43.921 CXX test/cpp_headers/init.o 00:03:43.921 LINK spdk_dd 00:03:43.921 CXX test/cpp_headers/ioat.o 00:03:43.921 CXX test/cpp_headers/ioat_spec.o 00:03:43.921 CXX test/cpp_headers/iscsi_spec.o 00:03:43.921 CXX test/cpp_headers/json.o 00:03:43.921 CXX test/cpp_headers/jsonrpc.o 00:03:43.921 CXX test/cpp_headers/keyring.o 00:03:43.921 LINK spdk_trace 00:03:43.921 CXX test/cpp_headers/keyring_module.o 00:03:43.921 CXX test/cpp_headers/likely.o 00:03:43.921 CXX test/cpp_headers/log.o 00:03:43.921 CXX test/cpp_headers/lvol.o 00:03:43.921 CXX test/cpp_headers/md5.o 00:03:43.921 CXX test/cpp_headers/memory.o 00:03:43.921 LINK pci_ut 00:03:43.921 CXX test/cpp_headers/mmio.o 00:03:43.921 CXX test/cpp_headers/nbd.o 00:03:43.921 CXX test/cpp_headers/net.o 00:03:43.921 CXX test/cpp_headers/notify.o 00:03:43.921 CXX test/cpp_headers/nvme.o 00:03:43.921 CXX test/cpp_headers/nvme_intel.o 00:03:43.921 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.186 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.186 CXX test/cpp_headers/nvme_spec.o 00:03:44.186 CXX test/cpp_headers/nvme_zns.o 00:03:44.186 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.186 CC test/event/event_perf/event_perf.o 00:03:44.186 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.186 CC test/event/reactor/reactor.o 00:03:44.186 CC test/event/reactor_perf/reactor_perf.o 00:03:44.186 CXX test/cpp_headers/nvmf.o 00:03:44.186 CC test/event/app_repeat/app_repeat.o 00:03:44.186 CC examples/idxd/perf/perf.o 00:03:44.186 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.186 LINK test_dma 00:03:44.186 CC examples/sock/hello_world/hello_sock.o 00:03:44.186 CXX test/cpp_headers/nvmf_spec.o 00:03:44.186 CXX test/cpp_headers/nvmf_transport.o 00:03:44.186 LINK nvme_fuzz 00:03:44.186 LINK spdk_nvme 00:03:44.186 CC 
examples/vmd/led/led.o 00:03:44.186 CC examples/thread/thread/thread_ex.o 00:03:44.186 LINK spdk_bdev 00:03:44.450 CC test/event/scheduler/scheduler.o 00:03:44.450 CXX test/cpp_headers/opal.o 00:03:44.450 CXX test/cpp_headers/opal_spec.o 00:03:44.450 CXX test/cpp_headers/pci_ids.o 00:03:44.450 CXX test/cpp_headers/pipe.o 00:03:44.450 CXX test/cpp_headers/queue.o 00:03:44.450 CXX test/cpp_headers/reduce.o 00:03:44.450 CXX test/cpp_headers/rpc.o 00:03:44.450 CXX test/cpp_headers/scheduler.o 00:03:44.450 CXX test/cpp_headers/scsi.o 00:03:44.450 CXX test/cpp_headers/scsi_spec.o 00:03:44.450 CXX test/cpp_headers/sock.o 00:03:44.450 CXX test/cpp_headers/stdinc.o 00:03:44.450 CXX test/cpp_headers/string.o 00:03:44.450 CXX test/cpp_headers/thread.o 00:03:44.450 CXX test/cpp_headers/trace.o 00:03:44.450 CXX test/cpp_headers/trace_parser.o 00:03:44.450 CXX test/cpp_headers/tree.o 00:03:44.450 CXX test/cpp_headers/ublk.o 00:03:44.450 LINK reactor 00:03:44.450 LINK reactor_perf 00:03:44.450 CXX test/cpp_headers/util.o 00:03:44.450 CXX test/cpp_headers/uuid.o 00:03:44.450 LINK event_perf 00:03:44.450 CXX test/cpp_headers/version.o 00:03:44.450 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.450 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.450 CC app/vhost/vhost.o 00:03:44.450 CXX test/cpp_headers/vhost.o 00:03:44.450 LINK lsvmd 00:03:44.450 LINK app_repeat 00:03:44.450 CXX test/cpp_headers/vmd.o 00:03:44.713 LINK vhost_fuzz 00:03:44.713 LINK mem_callbacks 00:03:44.713 CXX test/cpp_headers/xor.o 00:03:44.713 CXX test/cpp_headers/zipf.o 00:03:44.713 LINK spdk_nvme_perf 00:03:44.713 LINK led 00:03:44.713 LINK spdk_nvme_identify 00:03:44.713 LINK spdk_top 00:03:44.713 LINK hello_sock 00:03:44.713 LINK thread 00:03:44.973 LINK scheduler 00:03:44.973 LINK idxd_perf 00:03:44.973 LINK vhost 00:03:44.973 CC test/nvme/aer/aer.o 00:03:44.973 CC test/nvme/connect_stress/connect_stress.o 00:03:44.973 CC test/nvme/reset/reset.o 00:03:44.973 CC test/nvme/startup/startup.o 00:03:44.973 CC 
test/nvme/e2edp/nvme_dp.o 00:03:44.973 CC test/nvme/overhead/overhead.o 00:03:44.973 CC test/nvme/sgl/sgl.o 00:03:44.973 CC test/nvme/simple_copy/simple_copy.o 00:03:44.973 CC test/nvme/err_injection/err_injection.o 00:03:44.973 CC test/nvme/reserve/reserve.o 00:03:44.973 CC test/nvme/boot_partition/boot_partition.o 00:03:44.973 CC test/nvme/compliance/nvme_compliance.o 00:03:44.973 CC test/nvme/fused_ordering/fused_ordering.o 00:03:44.973 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.973 CC test/nvme/cuse/cuse.o 00:03:44.973 CC test/nvme/fdp/fdp.o 00:03:44.973 CC test/blobfs/mkfs/mkfs.o 00:03:44.973 CC test/accel/dif/dif.o 00:03:44.973 CC test/lvol/esnap/esnap.o 00:03:45.231 LINK startup 00:03:45.231 LINK boot_partition 00:03:45.231 LINK connect_stress 00:03:45.231 LINK doorbell_aers 00:03:45.231 LINK fused_ordering 00:03:45.231 LINK reserve 00:03:45.231 LINK err_injection 00:03:45.231 LINK simple_copy 00:03:45.231 LINK mkfs 00:03:45.231 CC examples/nvme/abort/abort.o 00:03:45.231 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.231 CC examples/nvme/hello_world/hello_world.o 00:03:45.231 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.231 CC examples/nvme/reconnect/reconnect.o 00:03:45.231 CC examples/nvme/hotplug/hotplug.o 00:03:45.231 CC examples/nvme/arbitration/arbitration.o 00:03:45.231 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.231 LINK nvme_dp 00:03:45.231 LINK aer 00:03:45.231 LINK overhead 00:03:45.231 CC examples/accel/perf/accel_perf.o 00:03:45.489 CC examples/blob/cli/blobcli.o 00:03:45.489 CC examples/blob/hello_world/hello_blob.o 00:03:45.489 LINK memory_ut 00:03:45.489 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:45.489 LINK reset 00:03:45.489 LINK sgl 00:03:45.489 LINK pmr_persistence 00:03:45.489 LINK cmb_copy 00:03:45.489 LINK fdp 00:03:45.489 LINK nvme_compliance 00:03:45.747 LINK hello_world 00:03:45.747 LINK hotplug 00:03:45.747 LINK arbitration 00:03:45.747 LINK hello_fsdev 00:03:45.747 LINK reconnect 00:03:45.747 
LINK abort 00:03:45.747 LINK hello_blob 00:03:45.747 LINK nvme_manage 00:03:46.006 LINK accel_perf 00:03:46.006 LINK blobcli 00:03:46.006 LINK dif 00:03:46.263 LINK iscsi_fuzz 00:03:46.263 CC examples/bdev/hello_world/hello_bdev.o 00:03:46.263 CC examples/bdev/bdevperf/bdevperf.o 00:03:46.263 CC test/bdev/bdevio/bdevio.o 00:03:46.521 LINK hello_bdev 00:03:46.779 LINK cuse 00:03:46.779 LINK bdevio 00:03:47.037 LINK bdevperf 00:03:47.603 CC examples/nvmf/nvmf/nvmf.o 00:03:47.860 LINK nvmf 00:03:51.143 LINK esnap 00:03:51.143 00:03:51.143 real 1m10.962s 00:03:51.143 user 11m54.705s 00:03:51.143 sys 2m38.859s 00:03:51.143 16:32:04 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:51.143 16:32:04 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.143 ************************************ 00:03:51.143 END TEST make 00:03:51.143 ************************************ 00:03:51.143 16:32:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.143 16:32:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.143 16:32:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.143 16:32:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.143 16:32:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.143 16:32:04 -- pm/common@44 -- $ pid=2153533 00:03:51.143 16:32:04 -- pm/common@50 -- $ kill -TERM 2153533 00:03:51.143 16:32:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.143 16:32:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.143 16:32:04 -- pm/common@44 -- $ pid=2153535 00:03:51.143 16:32:04 -- pm/common@50 -- $ kill -TERM 2153535 00:03:51.143 16:32:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.143 16:32:04 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:51.143 16:32:04 -- pm/common@44 -- $ pid=2153537 00:03:51.143 16:32:04 -- pm/common@50 -- $ kill -TERM 2153537 00:03:51.143 16:32:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.143 16:32:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:51.143 16:32:04 -- pm/common@44 -- $ pid=2153565 00:03:51.143 16:32:04 -- pm/common@50 -- $ sudo -E kill -TERM 2153565 00:03:51.402 16:32:04 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:51.402 16:32:04 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:51.402 16:32:04 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:51.402 16:32:04 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:51.402 16:32:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.402 16:32:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.402 16:32:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.402 16:32:04 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.402 16:32:04 -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.402 16:32:04 -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.402 16:32:04 -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.402 16:32:04 -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.402 16:32:04 -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.402 16:32:04 -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.402 16:32:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.402 16:32:04 -- scripts/common.sh@344 -- # case "$op" in 00:03:51.402 16:32:04 -- scripts/common.sh@345 -- # : 1 00:03:51.402 16:32:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.402 16:32:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.402 16:32:04 -- scripts/common.sh@365 -- # decimal 1 00:03:51.402 16:32:04 -- scripts/common.sh@353 -- # local d=1 00:03:51.402 16:32:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.402 16:32:04 -- scripts/common.sh@355 -- # echo 1 00:03:51.402 16:32:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.402 16:32:04 -- scripts/common.sh@366 -- # decimal 2 00:03:51.402 16:32:04 -- scripts/common.sh@353 -- # local d=2 00:03:51.402 16:32:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.402 16:32:04 -- scripts/common.sh@355 -- # echo 2 00:03:51.402 16:32:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.402 16:32:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.402 16:32:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.402 16:32:04 -- scripts/common.sh@368 -- # return 0 00:03:51.402 16:32:04 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.402 16:32:04 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.402 --rc genhtml_branch_coverage=1 00:03:51.402 --rc genhtml_function_coverage=1 00:03:51.402 --rc genhtml_legend=1 00:03:51.402 --rc geninfo_all_blocks=1 00:03:51.402 --rc geninfo_unexecuted_blocks=1 00:03:51.402 00:03:51.402 ' 00:03:51.402 16:32:04 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.402 --rc genhtml_branch_coverage=1 00:03:51.402 --rc genhtml_function_coverage=1 00:03:51.402 --rc genhtml_legend=1 00:03:51.402 --rc geninfo_all_blocks=1 00:03:51.402 --rc geninfo_unexecuted_blocks=1 00:03:51.402 00:03:51.402 ' 00:03:51.402 16:32:04 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.402 --rc genhtml_branch_coverage=1 00:03:51.402 --rc 
genhtml_function_coverage=1 00:03:51.402 --rc genhtml_legend=1 00:03:51.402 --rc geninfo_all_blocks=1 00:03:51.402 --rc geninfo_unexecuted_blocks=1 00:03:51.402 00:03:51.402 ' 00:03:51.402 16:32:04 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:51.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.402 --rc genhtml_branch_coverage=1 00:03:51.402 --rc genhtml_function_coverage=1 00:03:51.402 --rc genhtml_legend=1 00:03:51.402 --rc geninfo_all_blocks=1 00:03:51.402 --rc geninfo_unexecuted_blocks=1 00:03:51.402 00:03:51.402 ' 00:03:51.402 16:32:04 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:51.402 16:32:04 -- nvmf/common.sh@7 -- # uname -s 00:03:51.402 16:32:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.402 16:32:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.402 16:32:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.402 16:32:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.402 16:32:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.402 16:32:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.402 16:32:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.402 16:32:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.402 16:32:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.402 16:32:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.402 16:32:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:51.402 16:32:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:51.402 16:32:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.402 16:32:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.402 16:32:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:51.402 16:32:04 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.402 16:32:04 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:51.402 16:32:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:51.402 16:32:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.402 16:32:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.402 16:32:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.402 16:32:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.402 16:32:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.402 16:32:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.402 16:32:04 -- paths/export.sh@5 -- # export PATH 00:03:51.402 16:32:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.402 16:32:04 -- nvmf/common.sh@51 -- # : 0 00:03:51.402 16:32:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:51.402 16:32:04 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:51.402 16:32:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.402 16:32:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.402 16:32:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.402 16:32:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:51.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:51.402 16:32:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:51.402 16:32:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:51.402 16:32:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:51.402 16:32:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.402 16:32:04 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.402 16:32:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.402 16:32:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.402 16:32:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.402 16:32:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.402 16:32:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.402 16:32:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.402 16:32:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.402 16:32:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.402 16:32:04 -- spdk/autotest.sh@48 -- # udevadm_pid=2212972 00:03:51.402 16:32:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.402 16:32:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.402 16:32:04 -- pm/common@17 -- # local monitor 00:03:51.402 16:32:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.402 16:32:04 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:51.402 16:32:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.402 16:32:04 -- pm/common@21 -- # date +%s 00:03:51.402 16:32:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.402 16:32:04 -- pm/common@21 -- # date +%s 00:03:51.402 16:32:04 -- pm/common@25 -- # sleep 1 00:03:51.402 16:32:04 -- pm/common@21 -- # date +%s 00:03:51.402 16:32:04 -- pm/common@21 -- # date +%s 00:03:51.402 16:32:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729175524 00:03:51.402 16:32:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729175524 00:03:51.402 16:32:04 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729175524 00:03:51.402 16:32:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729175524 00:03:51.402 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729175524_collect-cpu-load.pm.log 00:03:51.402 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729175524_collect-vmstat.pm.log 00:03:51.403 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729175524_collect-cpu-temp.pm.log 00:03:51.403 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729175524_collect-bmc-pm.bmc.pm.log 00:03:52.337 
16:32:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.337 16:32:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.337 16:32:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.337 16:32:05 -- common/autotest_common.sh@10 -- # set +x 00:03:52.337 16:32:05 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.337 16:32:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:52.337 16:32:05 -- common/autotest_common.sh@10 -- # set +x 00:03:52.337 16:32:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:52.337 16:32:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.337 16:32:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.337 16:32:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:52.337 16:32:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.337 16:32:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.337 16:32:06 -- common/autotest_common.sh@1455 -- # uname 00:03:52.337 16:32:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:52.337 16:32:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.337 16:32:06 -- common/autotest_common.sh@1475 -- # uname 00:03:52.337 16:32:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:52.337 16:32:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:52.337 16:32:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:52.596 lcov: LCOV version 1.15 00:03:52.596 16:32:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:14.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:14.512 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:36.440 16:32:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:36.440 16:32:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.440 16:32:47 -- common/autotest_common.sh@10 -- # set +x 00:04:36.440 16:32:47 -- spdk/autotest.sh@78 -- # rm -f 00:04:36.440 16:32:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.440 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:36.440 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:36.440 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:36.440 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:36.440 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:36.440 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:36.440 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:36.440 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:36.440 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:36.440 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:36.440 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:36.440 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:36.440 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:36.440 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:36.440 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:36.440 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:36.440 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:36.440 16:32:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:36.440 16:32:48 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:36.440 16:32:48 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:36.440 16:32:48 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:36.440 16:32:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.440 16:32:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:36.440 16:32:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:36.440 16:32:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.440 16:32:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.440 16:32:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:36.440 16:32:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.440 16:32:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.440 16:32:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:36.440 16:32:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:36.440 16:32:48 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.440 No valid GPT data, bailing 00:04:36.440 16:32:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.440 16:32:49 -- scripts/common.sh@394 -- # pt= 00:04:36.440 16:32:49 -- scripts/common.sh@395 -- # return 1 00:04:36.440 16:32:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.440 1+0 records in 00:04:36.440 1+0 records out 00:04:36.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00221581 s, 473 MB/s 00:04:36.440 16:32:49 -- spdk/autotest.sh@105 -- # sync 00:04:36.440 16:32:49 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.440 16:32:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.440 16:32:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.814 16:32:51 -- spdk/autotest.sh@111 -- # uname -s 00:04:37.814 16:32:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:37.814 16:32:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:37.814 16:32:51 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:38.750 Hugepages 00:04:38.750 node hugesize free / total 00:04:38.750 node0 1048576kB 0 / 0 00:04:38.750 node0 2048kB 0 / 0 00:04:38.750 node1 1048576kB 0 / 0 00:04:38.750 node1 2048kB 0 / 0 00:04:38.750 00:04:38.750 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.750 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:38.750 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:38.750 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:38.750 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:38.750 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:38.750 16:32:52 -- spdk/autotest.sh@117 -- # uname -s 00:04:38.750 16:32:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:38.750 16:32:52 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:38.750 16:32:52 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.226 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:40.226 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:40.226 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:40.792 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.049 16:32:54 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:41.983 16:32:55 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:41.983 16:32:55 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:41.983 16:32:55 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:41.983 16:32:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:41.983 16:32:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:41.983 16:32:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:41.983 16:32:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.983 16:32:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.983 16:32:55 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:42.242 16:32:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:42.242 16:32:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:04:42.242 16:32:55 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.178 Waiting for block devices as requested 00:04:43.178 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:43.436 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:43.436 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:43.436 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:43.693 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:43.693 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:43.693 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:43.693 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:43.952 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:43.952 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:43.952 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:44.210 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:44.210 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:44.210 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:44.210 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:44.469 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:44.469 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:44.739 16:32:58 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:44.739 16:32:58 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1485 -- # grep 0000:0b:00.0/nvme/nvme 00:04:44.739 16:32:58 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:44.739 16:32:58 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:44.739 16:32:58 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:44.739 16:32:58 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:44.739 16:32:58 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:44.739 16:32:58 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:44.739 16:32:58 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:44.739 16:32:58 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:44.739 16:32:58 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:44.739 16:32:58 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:44.739 16:32:58 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:44.739 16:32:58 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:44.739 16:32:58 -- common/autotest_common.sh@1541 -- # continue 00:04:44.739 16:32:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:44.739 16:32:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.739 16:32:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.739 16:32:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:44.739 16:32:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.739 16:32:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.739 16:32:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.676 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:45.934 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:04:45.934 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:45.934 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:45.934 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:45.934 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:45.934 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:45.934 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:45.934 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:46.870 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:47.129 16:33:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:47.129 16:33:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.129 16:33:00 -- common/autotest_common.sh@10 -- # set +x 00:04:47.129 16:33:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:47.129 16:33:00 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:47.129 16:33:00 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:47.129 16:33:00 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:47.129 16:33:00 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:47.129 16:33:00 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:47.129 16:33:00 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:47.129 16:33:00 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:47.129 16:33:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:47.129 16:33:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:47.129 16:33:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:47.129 16:33:00 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:47.129 16:33:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:47.129 16:33:00 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:47.129 16:33:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:04:47.129 16:33:00 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:47.129 16:33:00 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:47.129 16:33:00 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:47.129 16:33:00 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:47.129 16:33:00 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:47.129 16:33:00 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:47.129 16:33:00 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:0b:00.0 00:04:47.129 16:33:00 -- common/autotest_common.sh@1577 -- # [[ -z 0000:0b:00.0 ]] 00:04:47.129 16:33:00 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2223944 00:04:47.129 16:33:00 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.129 16:33:00 -- common/autotest_common.sh@1583 -- # waitforlisten 2223944 00:04:47.129 16:33:00 -- common/autotest_common.sh@831 -- # '[' -z 2223944 ']' 00:04:47.129 16:33:00 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.129 16:33:00 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.129 16:33:00 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:47.129 16:33:00 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.129 16:33:00 -- common/autotest_common.sh@10 -- # set +x 00:04:47.129 [2024-10-17 16:33:00.710304] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:04:47.129 [2024-10-17 16:33:00.710398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223944 ] 00:04:47.129 [2024-10-17 16:33:00.772311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.388 [2024-10-17 16:33:00.836283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.646 16:33:01 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.646 16:33:01 -- common/autotest_common.sh@864 -- # return 0 00:04:47.646 16:33:01 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:47.646 16:33:01 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:47.646 16:33:01 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:50.928 nvme0n1 00:04:50.928 16:33:04 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:50.928 [2024-10-17 16:33:04.458572] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:50.928 [2024-10-17 16:33:04.458623] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:50.928 request: 00:04:50.928 { 00:04:50.928 "nvme_ctrlr_name": "nvme0", 00:04:50.928 "password": "test", 00:04:50.928 "method": "bdev_nvme_opal_revert", 00:04:50.928 "req_id": 1 00:04:50.928 } 00:04:50.928 Got JSON-RPC error response 00:04:50.928 response: 00:04:50.928 { 00:04:50.928 
"code": -32603, 00:04:50.928 "message": "Internal error" 00:04:50.928 } 00:04:50.928 16:33:04 -- common/autotest_common.sh@1589 -- # true 00:04:50.928 16:33:04 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:50.928 16:33:04 -- common/autotest_common.sh@1593 -- # killprocess 2223944 00:04:50.928 16:33:04 -- common/autotest_common.sh@950 -- # '[' -z 2223944 ']' 00:04:50.928 16:33:04 -- common/autotest_common.sh@954 -- # kill -0 2223944 00:04:50.928 16:33:04 -- common/autotest_common.sh@955 -- # uname 00:04:50.928 16:33:04 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.928 16:33:04 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2223944 00:04:50.928 16:33:04 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.928 16:33:04 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.928 16:33:04 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2223944' 00:04:50.928 killing process with pid 2223944 00:04:50.928 16:33:04 -- common/autotest_common.sh@969 -- # kill 2223944 00:04:50.928 16:33:04 -- common/autotest_common.sh@974 -- # wait 2223944 00:04:52.828 16:33:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:52.828 16:33:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:52.828 16:33:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:52.828 16:33:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:52.828 16:33:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:52.828 16:33:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.828 16:33:06 -- common/autotest_common.sh@10 -- # set +x 00:04:52.828 16:33:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:52.828 16:33:06 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.828 16:33:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.828 16:33:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.828 16:33:06 -- 
common/autotest_common.sh@10 -- # set +x 00:04:52.828 ************************************ 00:04:52.828 START TEST env 00:04:52.828 ************************************ 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.828 * Looking for test storage... 00:04:52.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.828 16:33:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.828 16:33:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.828 16:33:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.828 16:33:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.828 16:33:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.828 16:33:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.828 16:33:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.828 16:33:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.828 16:33:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.828 16:33:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.828 16:33:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.828 16:33:06 env -- scripts/common.sh@344 -- # case "$op" in 00:04:52.828 16:33:06 env -- scripts/common.sh@345 -- # : 1 00:04:52.828 16:33:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.828 16:33:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.828 16:33:06 env -- scripts/common.sh@365 -- # decimal 1 00:04:52.828 16:33:06 env -- scripts/common.sh@353 -- # local d=1 00:04:52.828 16:33:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.828 16:33:06 env -- scripts/common.sh@355 -- # echo 1 00:04:52.828 16:33:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.828 16:33:06 env -- scripts/common.sh@366 -- # decimal 2 00:04:52.828 16:33:06 env -- scripts/common.sh@353 -- # local d=2 00:04:52.828 16:33:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.828 16:33:06 env -- scripts/common.sh@355 -- # echo 2 00:04:52.828 16:33:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.828 16:33:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.828 16:33:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.828 16:33:06 env -- scripts/common.sh@368 -- # return 0 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.828 --rc genhtml_branch_coverage=1 00:04:52.828 --rc genhtml_function_coverage=1 00:04:52.828 --rc genhtml_legend=1 00:04:52.828 --rc geninfo_all_blocks=1 00:04:52.828 --rc geninfo_unexecuted_blocks=1 00:04:52.828 00:04:52.828 ' 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.828 --rc genhtml_branch_coverage=1 00:04:52.828 --rc genhtml_function_coverage=1 00:04:52.828 --rc genhtml_legend=1 00:04:52.828 --rc geninfo_all_blocks=1 00:04:52.828 --rc geninfo_unexecuted_blocks=1 00:04:52.828 00:04:52.828 ' 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:52.828 --rc genhtml_branch_coverage=1 00:04:52.828 --rc genhtml_function_coverage=1 00:04:52.828 --rc genhtml_legend=1 00:04:52.828 --rc geninfo_all_blocks=1 00:04:52.828 --rc geninfo_unexecuted_blocks=1 00:04:52.828 00:04:52.828 ' 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.828 --rc genhtml_branch_coverage=1 00:04:52.828 --rc genhtml_function_coverage=1 00:04:52.828 --rc genhtml_legend=1 00:04:52.828 --rc geninfo_all_blocks=1 00:04:52.828 --rc geninfo_unexecuted_blocks=1 00:04:52.828 00:04:52.828 ' 00:04:52.828 16:33:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.828 16:33:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.828 16:33:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.828 ************************************ 00:04:52.828 START TEST env_memory 00:04:52.828 ************************************ 00:04:52.828 16:33:06 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:52.828 00:04:52.828 00:04:52.828 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.828 http://cunit.sourceforge.net/ 00:04:52.828 00:04:52.828 00:04:52.828 Suite: memory 00:04:52.828 Test: alloc and free memory map ...[2024-10-17 16:33:06.472261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:52.828 passed 00:04:52.828 Test: mem map translation ...[2024-10-17 16:33:06.492551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:52.828 [2024-10-17 
16:33:06.492575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:52.829 [2024-10-17 16:33:06.492626] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:52.829 [2024-10-17 16:33:06.492638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.087 passed 00:04:53.087 Test: mem map registration ...[2024-10-17 16:33:06.534441] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:53.087 [2024-10-17 16:33:06.534463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:53.087 passed 00:04:53.087 Test: mem map adjacent registrations ...passed 00:04:53.087 00:04:53.087 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.087 suites 1 1 n/a 0 0 00:04:53.087 tests 4 4 4 0 0 00:04:53.087 asserts 152 152 152 0 n/a 00:04:53.087 00:04:53.087 Elapsed time = 0.144 seconds 00:04:53.087 00:04:53.087 real 0m0.152s 00:04:53.087 user 0m0.142s 00:04:53.087 sys 0m0.009s 00:04:53.087 16:33:06 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.087 16:33:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:53.087 ************************************ 00:04:53.087 END TEST env_memory 00:04:53.087 ************************************ 00:04:53.087 16:33:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.087 16:33:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:04:53.087 16:33:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.087 16:33:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.087 ************************************ 00:04:53.087 START TEST env_vtophys 00:04:53.087 ************************************ 00:04:53.087 16:33:06 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.087 EAL: lib.eal log level changed from notice to debug 00:04:53.087 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.087 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.087 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.087 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.087 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.087 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.087 EAL: Detected lcore 6 as core 8 on socket 0 00:04:53.087 EAL: Detected lcore 7 as core 9 on socket 0 00:04:53.087 EAL: Detected lcore 8 as core 10 on socket 0 00:04:53.087 EAL: Detected lcore 9 as core 11 on socket 0 00:04:53.087 EAL: Detected lcore 10 as core 12 on socket 0 00:04:53.087 EAL: Detected lcore 11 as core 13 on socket 0 00:04:53.087 EAL: Detected lcore 12 as core 0 on socket 1 00:04:53.087 EAL: Detected lcore 13 as core 1 on socket 1 00:04:53.087 EAL: Detected lcore 14 as core 2 on socket 1 00:04:53.087 EAL: Detected lcore 15 as core 3 on socket 1 00:04:53.087 EAL: Detected lcore 16 as core 4 on socket 1 00:04:53.087 EAL: Detected lcore 17 as core 5 on socket 1 00:04:53.087 EAL: Detected lcore 18 as core 8 on socket 1 00:04:53.087 EAL: Detected lcore 19 as core 9 on socket 1 00:04:53.087 EAL: Detected lcore 20 as core 10 on socket 1 00:04:53.087 EAL: Detected lcore 21 as core 11 on socket 1 00:04:53.087 EAL: Detected lcore 22 as core 12 on socket 1 00:04:53.087 EAL: Detected lcore 23 as core 13 on socket 1 00:04:53.087 EAL: Detected lcore 24 as core 0 on socket 0 00:04:53.087 EAL: Detected lcore 25 as core 
1 on socket 0 00:04:53.087 EAL: Detected lcore 26 as core 2 on socket 0 00:04:53.087 EAL: Detected lcore 27 as core 3 on socket 0 00:04:53.087 EAL: Detected lcore 28 as core 4 on socket 0 00:04:53.087 EAL: Detected lcore 29 as core 5 on socket 0 00:04:53.087 EAL: Detected lcore 30 as core 8 on socket 0 00:04:53.087 EAL: Detected lcore 31 as core 9 on socket 0 00:04:53.087 EAL: Detected lcore 32 as core 10 on socket 0 00:04:53.088 EAL: Detected lcore 33 as core 11 on socket 0 00:04:53.088 EAL: Detected lcore 34 as core 12 on socket 0 00:04:53.088 EAL: Detected lcore 35 as core 13 on socket 0 00:04:53.088 EAL: Detected lcore 36 as core 0 on socket 1 00:04:53.088 EAL: Detected lcore 37 as core 1 on socket 1 00:04:53.088 EAL: Detected lcore 38 as core 2 on socket 1 00:04:53.088 EAL: Detected lcore 39 as core 3 on socket 1 00:04:53.088 EAL: Detected lcore 40 as core 4 on socket 1 00:04:53.088 EAL: Detected lcore 41 as core 5 on socket 1 00:04:53.088 EAL: Detected lcore 42 as core 8 on socket 1 00:04:53.088 EAL: Detected lcore 43 as core 9 on socket 1 00:04:53.088 EAL: Detected lcore 44 as core 10 on socket 1 00:04:53.088 EAL: Detected lcore 45 as core 11 on socket 1 00:04:53.088 EAL: Detected lcore 46 as core 12 on socket 1 00:04:53.088 EAL: Detected lcore 47 as core 13 on socket 1 00:04:53.088 EAL: Maximum logical cores by configuration: 128 00:04:53.088 EAL: Detected CPU lcores: 48 00:04:53.088 EAL: Detected NUMA nodes: 2 00:04:53.088 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:53.088 EAL: Detected shared linkage of DPDK 00:04:53.088 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.088 EAL: Bus pci wants IOVA as 'DC' 00:04:53.088 EAL: Buses did not request a specific IOVA mode. 00:04:53.088 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.088 EAL: Selected IOVA mode 'VA' 00:04:53.088 EAL: Probing VFIO support... 
00:04:53.088 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.088 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.088 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.088 EAL: VFIO support initialized 00:04:53.088 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.088 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.088 EAL: Setting up physically contiguous memory... 00:04:53.088 EAL: Setting maximum number of open files to 524288 00:04:53.088 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:53.088 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:53.088 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:53.088 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:53.088 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:53.088 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.088 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:53.088 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.088 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.088 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:53.088 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:53.088 EAL: Hugepages will be freed exactly as allocated. 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: TSC frequency is ~2700000 KHz 00:04:53.088 EAL: Main lcore 0 is ready (tid=7f612a83ea00;cpuset=[0]) 00:04:53.088 EAL: Trying to obtain current memory policy. 00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 0 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 2MB 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:53.088 EAL: Mem event callback 'spdk:(nil)' registered 00:04:53.088 00:04:53.088 00:04:53.088 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.088 http://cunit.sourceforge.net/ 00:04:53.088 00:04:53.088 00:04:53.088 Suite: components_suite 00:04:53.088 Test: vtophys_malloc_test ...passed 00:04:53.088 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 4 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.088 EAL: Trying to obtain current memory policy. 
00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 4 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.088 EAL: Trying to obtain current memory policy. 00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 4 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.088 EAL: Trying to obtain current memory policy. 00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 4 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.088 EAL: Trying to obtain current memory policy. 
00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 4 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.088 EAL: Trying to obtain current memory policy. 00:04:53.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.088 EAL: Restoring previous memory policy: 4 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.088 EAL: request: mp_malloc_sync 00:04:53.088 EAL: No shared files mode enabled, IPC is disabled 00:04:53.088 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.346 EAL: request: mp_malloc_sync 00:04:53.346 EAL: No shared files mode enabled, IPC is disabled 00:04:53.346 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.346 EAL: Trying to obtain current memory policy. 00:04:53.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.346 EAL: Restoring previous memory policy: 4 00:04:53.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.346 EAL: request: mp_malloc_sync 00:04:53.346 EAL: No shared files mode enabled, IPC is disabled 00:04:53.346 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.346 EAL: request: mp_malloc_sync 00:04:53.346 EAL: No shared files mode enabled, IPC is disabled 00:04:53.346 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.346 EAL: Trying to obtain current memory policy. 
00:04:53.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.346 EAL: Restoring previous memory policy: 4 00:04:53.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.346 EAL: request: mp_malloc_sync 00:04:53.346 EAL: No shared files mode enabled, IPC is disabled 00:04:53.346 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.346 EAL: request: mp_malloc_sync 00:04:53.346 EAL: No shared files mode enabled, IPC is disabled 00:04:53.346 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.346 EAL: Trying to obtain current memory policy. 00:04:53.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.605 EAL: Restoring previous memory policy: 4 00:04:53.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.605 EAL: request: mp_malloc_sync 00:04:53.605 EAL: No shared files mode enabled, IPC is disabled 00:04:53.605 EAL: Heap on socket 0 was expanded by 514MB 00:04:53.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.863 EAL: request: mp_malloc_sync 00:04:53.863 EAL: No shared files mode enabled, IPC is disabled 00:04:53.863 EAL: Heap on socket 0 was shrunk by 514MB 00:04:53.863 EAL: Trying to obtain current memory policy. 
00:04:53.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.121 EAL: Restoring previous memory policy: 4 00:04:54.121 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.121 EAL: request: mp_malloc_sync 00:04:54.121 EAL: No shared files mode enabled, IPC is disabled 00:04:54.121 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.380 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.380 EAL: request: mp_malloc_sync 00:04:54.380 EAL: No shared files mode enabled, IPC is disabled 00:04:54.380 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.380 passed 00:04:54.380 00:04:54.380 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.380 suites 1 1 n/a 0 0 00:04:54.380 tests 2 2 2 0 0 00:04:54.380 asserts 497 497 497 0 n/a 00:04:54.380 00:04:54.380 Elapsed time = 1.302 seconds 00:04:54.380 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.380 EAL: request: mp_malloc_sync 00:04:54.380 EAL: No shared files mode enabled, IPC is disabled 00:04:54.380 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.380 EAL: No shared files mode enabled, IPC is disabled 00:04:54.380 EAL: No shared files mode enabled, IPC is disabled 00:04:54.380 EAL: No shared files mode enabled, IPC is disabled 00:04:54.380 00:04:54.380 real 0m1.424s 00:04:54.380 user 0m0.815s 00:04:54.380 sys 0m0.570s 00:04:54.380 16:33:08 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.380 16:33:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.380 ************************************ 00:04:54.380 END TEST env_vtophys 00:04:54.380 ************************************ 00:04:54.639 16:33:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.639 16:33:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.639 16:33:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.639 16:33:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.639 
************************************ 00:04:54.639 START TEST env_pci 00:04:54.639 ************************************ 00:04:54.639 16:33:08 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.639 00:04:54.639 00:04:54.639 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.639 http://cunit.sourceforge.net/ 00:04:54.639 00:04:54.639 00:04:54.639 Suite: pci 00:04:54.639 Test: pci_hook ...[2024-10-17 16:33:08.115890] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2225231 has claimed it 00:04:54.639 EAL: Cannot find device (10000:00:01.0) 00:04:54.639 EAL: Failed to attach device on primary process 00:04:54.639 passed 00:04:54.639 00:04:54.639 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.639 suites 1 1 n/a 0 0 00:04:54.639 tests 1 1 1 0 0 00:04:54.639 asserts 25 25 25 0 n/a 00:04:54.639 00:04:54.639 Elapsed time = 0.020 seconds 00:04:54.639 00:04:54.639 real 0m0.032s 00:04:54.639 user 0m0.015s 00:04:54.639 sys 0m0.017s 00:04:54.639 16:33:08 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.639 16:33:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.639 ************************************ 00:04:54.639 END TEST env_pci 00:04:54.639 ************************************ 00:04:54.639 16:33:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.639 16:33:08 env -- env/env.sh@15 -- # uname 00:04:54.639 16:33:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.639 16:33:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.639 16:33:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.639 16:33:08 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:54.639 16:33:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.639 16:33:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.639 ************************************ 00:04:54.639 START TEST env_dpdk_post_init 00:04:54.639 ************************************ 00:04:54.639 16:33:08 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.639 EAL: Detected CPU lcores: 48 00:04:54.639 EAL: Detected NUMA nodes: 2 00:04:54.639 EAL: Detected shared linkage of DPDK 00:04:54.639 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.639 EAL: Selected IOVA mode 'VA' 00:04:54.639 EAL: VFIO support initialized 00:04:54.639 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.639 EAL: Using IOMMU type 1 (Type 1) 00:04:54.639 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:54.639 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:54.898 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:54.898 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:54.898 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:54.898 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:54.898 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:54.898 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:55.464 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:55.464 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:55.722 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:55.722 EAL: Probe PCI driver: 
spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:55.722 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:55.722 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:55.722 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:55.722 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:55.722 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:59.004 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:59.004 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:59.004 Starting DPDK initialization... 00:04:59.004 Starting SPDK post initialization... 00:04:59.004 SPDK NVMe probe 00:04:59.004 Attaching to 0000:0b:00.0 00:04:59.004 Attached to 0000:0b:00.0 00:04:59.004 Cleaning up... 00:04:59.004 00:04:59.004 real 0m4.332s 00:04:59.004 user 0m2.952s 00:04:59.004 sys 0m0.443s 00:04:59.004 16:33:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.004 16:33:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.004 ************************************ 00:04:59.004 END TEST env_dpdk_post_init 00:04:59.004 ************************************ 00:04:59.004 16:33:12 env -- env/env.sh@26 -- # uname 00:04:59.004 16:33:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:59.004 16:33:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.004 16:33:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.004 16:33:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.004 16:33:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.004 ************************************ 00:04:59.004 START TEST env_mem_callbacks 00:04:59.004 ************************************ 00:04:59.004 16:33:12 
env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.004 EAL: Detected CPU lcores: 48 00:04:59.004 EAL: Detected NUMA nodes: 2 00:04:59.004 EAL: Detected shared linkage of DPDK 00:04:59.004 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.004 EAL: Selected IOVA mode 'VA' 00:04:59.004 EAL: VFIO support initialized 00:04:59.004 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.004 00:04:59.004 00:04:59.004 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.004 http://cunit.sourceforge.net/ 00:04:59.004 00:04:59.004 00:04:59.004 Suite: memory 00:04:59.004 Test: test ... 00:04:59.004 register 0x200000200000 2097152 00:04:59.004 malloc 3145728 00:04:59.004 register 0x200000400000 4194304 00:04:59.004 buf 0x200000500000 len 3145728 PASSED 00:04:59.004 malloc 64 00:04:59.004 buf 0x2000004fff40 len 64 PASSED 00:04:59.004 malloc 4194304 00:04:59.004 register 0x200000800000 6291456 00:04:59.004 buf 0x200000a00000 len 4194304 PASSED 00:04:59.004 free 0x200000500000 3145728 00:04:59.004 free 0x2000004fff40 64 00:04:59.004 unregister 0x200000400000 4194304 PASSED 00:04:59.004 free 0x200000a00000 4194304 00:04:59.004 unregister 0x200000800000 6291456 PASSED 00:04:59.004 malloc 8388608 00:04:59.004 register 0x200000400000 10485760 00:04:59.004 buf 0x200000600000 len 8388608 PASSED 00:04:59.004 free 0x200000600000 8388608 00:04:59.004 unregister 0x200000400000 10485760 PASSED 00:04:59.004 passed 00:04:59.004 00:04:59.004 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.004 suites 1 1 n/a 0 0 00:04:59.004 tests 1 1 1 0 0 00:04:59.004 asserts 15 15 15 0 n/a 00:04:59.004 00:04:59.004 Elapsed time = 0.005 seconds 00:04:59.004 00:04:59.004 real 0m0.049s 00:04:59.004 user 0m0.012s 00:04:59.004 sys 0m0.036s 00:04:59.004 16:33:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.004 16:33:12 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:59.004 ************************************ 00:04:59.004 END TEST env_mem_callbacks 00:04:59.004 ************************************ 00:04:59.004 00:04:59.004 real 0m6.360s 00:04:59.005 user 0m4.113s 00:04:59.005 sys 0m1.291s 00:04:59.005 16:33:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.005 16:33:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.005 ************************************ 00:04:59.005 END TEST env 00:04:59.005 ************************************ 00:04:59.005 16:33:12 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.005 16:33:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.005 16:33:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.005 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:04:59.005 ************************************ 00:04:59.005 START TEST rpc 00:04:59.005 ************************************ 00:04:59.005 16:33:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.263 * Looking for test storage... 
00:04:59.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.263 16:33:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.263 16:33:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.263 16:33:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.263 16:33:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.263 16:33:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.263 16:33:12 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.263 16:33:12 rpc -- scripts/common.sh@345 -- # : 1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.263 16:33:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.263 16:33:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.263 16:33:12 rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.263 16:33:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.263 16:33:12 rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.263 16:33:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.263 16:33:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.263 16:33:12 rpc -- scripts/common.sh@368 -- # return 0 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.263 --rc genhtml_branch_coverage=1 00:04:59.263 --rc genhtml_function_coverage=1 00:04:59.263 --rc genhtml_legend=1 00:04:59.263 --rc geninfo_all_blocks=1 00:04:59.263 --rc geninfo_unexecuted_blocks=1 00:04:59.263 00:04:59.263 ' 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.263 --rc genhtml_branch_coverage=1 00:04:59.263 --rc genhtml_function_coverage=1 00:04:59.263 --rc genhtml_legend=1 00:04:59.263 --rc geninfo_all_blocks=1 00:04:59.263 --rc geninfo_unexecuted_blocks=1 00:04:59.263 00:04:59.263 ' 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:59.263 --rc genhtml_branch_coverage=1 00:04:59.263 --rc genhtml_function_coverage=1 00:04:59.263 --rc genhtml_legend=1 00:04:59.263 --rc geninfo_all_blocks=1 00:04:59.263 --rc geninfo_unexecuted_blocks=1 00:04:59.263 00:04:59.263 ' 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.263 --rc genhtml_branch_coverage=1 00:04:59.263 --rc genhtml_function_coverage=1 00:04:59.263 --rc genhtml_legend=1 00:04:59.263 --rc geninfo_all_blocks=1 00:04:59.263 --rc geninfo_unexecuted_blocks=1 00:04:59.263 00:04:59.263 ' 00:04:59.263 16:33:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2226175 00:04:59.263 16:33:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:59.263 16:33:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.263 16:33:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2226175 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 2226175 ']' 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.263 16:33:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.263 [2024-10-17 16:33:12.894379] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:04:59.263 [2024-10-17 16:33:12.894474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226175 ] 00:04:59.263 [2024-10-17 16:33:12.950361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.522 [2024-10-17 16:33:13.007613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.522 [2024-10-17 16:33:13.007669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2226175' to capture a snapshot of events at runtime. 00:04:59.522 [2024-10-17 16:33:13.007696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.522 [2024-10-17 16:33:13.007712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.522 [2024-10-17 16:33:13.007722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2226175 for offline analysis/debug. 
00:04:59.522 [2024-10-17 16:33:13.008341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.781 16:33:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.781 16:33:13 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:59.781 16:33:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.781 16:33:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.781 16:33:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.781 16:33:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.781 16:33:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.781 16:33:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.781 16:33:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.781 ************************************ 00:04:59.781 START TEST rpc_integrity 00:04:59.781 ************************************ 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.781 16:33:13 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.781 { 00:04:59.781 "name": "Malloc0", 00:04:59.781 "aliases": [ 00:04:59.781 "72ec1efb-4f55-494b-9ca0-9b2c7463aa26" 00:04:59.781 ], 00:04:59.781 "product_name": "Malloc disk", 00:04:59.781 "block_size": 512, 00:04:59.781 "num_blocks": 16384, 00:04:59.781 "uuid": "72ec1efb-4f55-494b-9ca0-9b2c7463aa26", 00:04:59.781 "assigned_rate_limits": { 00:04:59.781 "rw_ios_per_sec": 0, 00:04:59.781 "rw_mbytes_per_sec": 0, 00:04:59.781 "r_mbytes_per_sec": 0, 00:04:59.781 "w_mbytes_per_sec": 0 00:04:59.781 }, 00:04:59.781 "claimed": false, 00:04:59.781 "zoned": false, 00:04:59.781 "supported_io_types": { 00:04:59.781 "read": true, 00:04:59.781 "write": true, 00:04:59.781 "unmap": true, 00:04:59.781 "flush": true, 00:04:59.781 "reset": true, 00:04:59.781 "nvme_admin": false, 00:04:59.781 "nvme_io": false, 00:04:59.781 "nvme_io_md": false, 00:04:59.781 "write_zeroes": true, 00:04:59.781 "zcopy": true, 00:04:59.781 "get_zone_info": false, 00:04:59.781 
"zone_management": false, 00:04:59.781 "zone_append": false, 00:04:59.781 "compare": false, 00:04:59.781 "compare_and_write": false, 00:04:59.781 "abort": true, 00:04:59.781 "seek_hole": false, 00:04:59.781 "seek_data": false, 00:04:59.781 "copy": true, 00:04:59.781 "nvme_iov_md": false 00:04:59.781 }, 00:04:59.781 "memory_domains": [ 00:04:59.781 { 00:04:59.781 "dma_device_id": "system", 00:04:59.781 "dma_device_type": 1 00:04:59.781 }, 00:04:59.781 { 00:04:59.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.781 "dma_device_type": 2 00:04:59.781 } 00:04:59.781 ], 00:04:59.781 "driver_specific": {} 00:04:59.781 } 00:04:59.781 ]' 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.781 [2024-10-17 16:33:13.415908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:59.781 [2024-10-17 16:33:13.415952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.781 [2024-10-17 16:33:13.415976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1793800 00:04:59.781 [2024-10-17 16:33:13.415992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.781 [2024-10-17 16:33:13.417551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.781 [2024-10-17 16:33:13.417579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.781 Passthru0 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.781 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.781 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.781 { 00:04:59.781 "name": "Malloc0", 00:04:59.781 "aliases": [ 00:04:59.781 "72ec1efb-4f55-494b-9ca0-9b2c7463aa26" 00:04:59.781 ], 00:04:59.781 "product_name": "Malloc disk", 00:04:59.781 "block_size": 512, 00:04:59.781 "num_blocks": 16384, 00:04:59.781 "uuid": "72ec1efb-4f55-494b-9ca0-9b2c7463aa26", 00:04:59.781 "assigned_rate_limits": { 00:04:59.781 "rw_ios_per_sec": 0, 00:04:59.781 "rw_mbytes_per_sec": 0, 00:04:59.781 "r_mbytes_per_sec": 0, 00:04:59.781 "w_mbytes_per_sec": 0 00:04:59.781 }, 00:04:59.781 "claimed": true, 00:04:59.781 "claim_type": "exclusive_write", 00:04:59.781 "zoned": false, 00:04:59.781 "supported_io_types": { 00:04:59.781 "read": true, 00:04:59.781 "write": true, 00:04:59.781 "unmap": true, 00:04:59.781 "flush": true, 00:04:59.781 "reset": true, 00:04:59.781 "nvme_admin": false, 00:04:59.781 "nvme_io": false, 00:04:59.781 "nvme_io_md": false, 00:04:59.781 "write_zeroes": true, 00:04:59.781 "zcopy": true, 00:04:59.781 "get_zone_info": false, 00:04:59.781 "zone_management": false, 00:04:59.781 "zone_append": false, 00:04:59.781 "compare": false, 00:04:59.781 "compare_and_write": false, 00:04:59.781 "abort": true, 00:04:59.781 "seek_hole": false, 00:04:59.781 "seek_data": false, 00:04:59.781 "copy": true, 00:04:59.781 "nvme_iov_md": false 00:04:59.781 }, 00:04:59.781 "memory_domains": [ 00:04:59.781 { 00:04:59.781 "dma_device_id": "system", 00:04:59.781 "dma_device_type": 1 00:04:59.781 }, 00:04:59.781 { 00:04:59.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.782 "dma_device_type": 2 00:04:59.782 } 00:04:59.782 ], 00:04:59.782 "driver_specific": {} 00:04:59.782 }, 00:04:59.782 { 
00:04:59.782 "name": "Passthru0", 00:04:59.782 "aliases": [ 00:04:59.782 "62d3dc52-d21f-5a3b-9777-a0101ddc54bf" 00:04:59.782 ], 00:04:59.782 "product_name": "passthru", 00:04:59.782 "block_size": 512, 00:04:59.782 "num_blocks": 16384, 00:04:59.782 "uuid": "62d3dc52-d21f-5a3b-9777-a0101ddc54bf", 00:04:59.782 "assigned_rate_limits": { 00:04:59.782 "rw_ios_per_sec": 0, 00:04:59.782 "rw_mbytes_per_sec": 0, 00:04:59.782 "r_mbytes_per_sec": 0, 00:04:59.782 "w_mbytes_per_sec": 0 00:04:59.782 }, 00:04:59.782 "claimed": false, 00:04:59.782 "zoned": false, 00:04:59.782 "supported_io_types": { 00:04:59.782 "read": true, 00:04:59.782 "write": true, 00:04:59.782 "unmap": true, 00:04:59.782 "flush": true, 00:04:59.782 "reset": true, 00:04:59.782 "nvme_admin": false, 00:04:59.782 "nvme_io": false, 00:04:59.782 "nvme_io_md": false, 00:04:59.782 "write_zeroes": true, 00:04:59.782 "zcopy": true, 00:04:59.782 "get_zone_info": false, 00:04:59.782 "zone_management": false, 00:04:59.782 "zone_append": false, 00:04:59.782 "compare": false, 00:04:59.782 "compare_and_write": false, 00:04:59.782 "abort": true, 00:04:59.782 "seek_hole": false, 00:04:59.782 "seek_data": false, 00:04:59.782 "copy": true, 00:04:59.782 "nvme_iov_md": false 00:04:59.782 }, 00:04:59.782 "memory_domains": [ 00:04:59.782 { 00:04:59.782 "dma_device_id": "system", 00:04:59.782 "dma_device_type": 1 00:04:59.782 }, 00:04:59.782 { 00:04:59.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.782 "dma_device_type": 2 00:04:59.782 } 00:04:59.782 ], 00:04:59.782 "driver_specific": { 00:04:59.782 "passthru": { 00:04:59.782 "name": "Passthru0", 00:04:59.782 "base_bdev_name": "Malloc0" 00:04:59.782 } 00:04:59.782 } 00:04:59.782 } 00:04:59.782 ]' 00:04:59.782 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.040 16:33:13 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.040 16:33:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.040 00:05:00.040 real 0m0.231s 00:05:00.040 user 0m0.155s 00:05:00.040 sys 0m0.020s 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.040 16:33:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 ************************************ 00:05:00.040 END TEST rpc_integrity 00:05:00.040 ************************************ 00:05:00.040 16:33:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.040 16:33:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.040 16:33:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.040 16:33:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.040 ************************************ 00:05:00.040 START TEST rpc_plugins 
00:05:00.041 ************************************ 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.041 { 00:05:00.041 "name": "Malloc1", 00:05:00.041 "aliases": [ 00:05:00.041 "df1ffd9c-cec2-43ee-8f5c-72918d0913d0" 00:05:00.041 ], 00:05:00.041 "product_name": "Malloc disk", 00:05:00.041 "block_size": 4096, 00:05:00.041 "num_blocks": 256, 00:05:00.041 "uuid": "df1ffd9c-cec2-43ee-8f5c-72918d0913d0", 00:05:00.041 "assigned_rate_limits": { 00:05:00.041 "rw_ios_per_sec": 0, 00:05:00.041 "rw_mbytes_per_sec": 0, 00:05:00.041 "r_mbytes_per_sec": 0, 00:05:00.041 "w_mbytes_per_sec": 0 00:05:00.041 }, 00:05:00.041 "claimed": false, 00:05:00.041 "zoned": false, 00:05:00.041 "supported_io_types": { 00:05:00.041 "read": true, 00:05:00.041 "write": true, 00:05:00.041 "unmap": true, 00:05:00.041 "flush": true, 00:05:00.041 "reset": true, 00:05:00.041 "nvme_admin": false, 00:05:00.041 "nvme_io": false, 00:05:00.041 "nvme_io_md": false, 00:05:00.041 "write_zeroes": true, 00:05:00.041 "zcopy": true, 00:05:00.041 "get_zone_info": false, 00:05:00.041 "zone_management": false, 00:05:00.041 
"zone_append": false, 00:05:00.041 "compare": false, 00:05:00.041 "compare_and_write": false, 00:05:00.041 "abort": true, 00:05:00.041 "seek_hole": false, 00:05:00.041 "seek_data": false, 00:05:00.041 "copy": true, 00:05:00.041 "nvme_iov_md": false 00:05:00.041 }, 00:05:00.041 "memory_domains": [ 00:05:00.041 { 00:05:00.041 "dma_device_id": "system", 00:05:00.041 "dma_device_type": 1 00:05:00.041 }, 00:05:00.041 { 00:05:00.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.041 "dma_device_type": 2 00:05:00.041 } 00:05:00.041 ], 00:05:00.041 "driver_specific": {} 00:05:00.041 } 00:05:00.041 ]' 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.041 16:33:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.041 00:05:00.041 real 0m0.117s 00:05:00.041 user 0m0.076s 00:05:00.041 sys 0m0.011s 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.041 16:33:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.041 ************************************ 
00:05:00.041 END TEST rpc_plugins 00:05:00.041 ************************************ 00:05:00.041 16:33:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.041 16:33:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.041 16:33:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.041 16:33:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.299 ************************************ 00:05:00.299 START TEST rpc_trace_cmd_test 00:05:00.299 ************************************ 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.299 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2226175", 00:05:00.299 "tpoint_group_mask": "0x8", 00:05:00.299 "iscsi_conn": { 00:05:00.299 "mask": "0x2", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "scsi": { 00:05:00.299 "mask": "0x4", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "bdev": { 00:05:00.299 "mask": "0x8", 00:05:00.299 "tpoint_mask": "0xffffffffffffffff" 00:05:00.299 }, 00:05:00.299 "nvmf_rdma": { 00:05:00.299 "mask": "0x10", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "nvmf_tcp": { 00:05:00.299 "mask": "0x20", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "ftl": { 00:05:00.299 "mask": "0x40", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "blobfs": { 00:05:00.299 "mask": "0x80", 00:05:00.299 
"tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "dsa": { 00:05:00.299 "mask": "0x200", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "thread": { 00:05:00.299 "mask": "0x400", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "nvme_pcie": { 00:05:00.299 "mask": "0x800", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "iaa": { 00:05:00.299 "mask": "0x1000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "nvme_tcp": { 00:05:00.299 "mask": "0x2000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "bdev_nvme": { 00:05:00.299 "mask": "0x4000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "sock": { 00:05:00.299 "mask": "0x8000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "blob": { 00:05:00.299 "mask": "0x10000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "bdev_raid": { 00:05:00.299 "mask": "0x20000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 }, 00:05:00.299 "scheduler": { 00:05:00.299 "mask": "0x40000", 00:05:00.299 "tpoint_mask": "0x0" 00:05:00.299 } 00:05:00.299 }' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:00.299 00:05:00.299 real 0m0.204s 00:05:00.299 user 0m0.177s 00:05:00.299 sys 0m0.018s 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.299 16:33:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.299 ************************************ 00:05:00.299 END TEST rpc_trace_cmd_test 00:05:00.299 ************************************ 00:05:00.299 16:33:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.299 16:33:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.299 16:33:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.299 16:33:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.299 16:33:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.300 16:33:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.558 ************************************ 00:05:00.558 START TEST rpc_daemon_integrity 00:05:00.558 ************************************ 00:05:00.558 16:33:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:00.558 16:33:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.558 16:33:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.558 16:33:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.558 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.558 { 00:05:00.558 "name": "Malloc2", 00:05:00.558 "aliases": [ 00:05:00.558 "a2d70401-bd4f-47c9-9654-16d9299da0cd" 00:05:00.558 ], 00:05:00.558 "product_name": "Malloc disk", 00:05:00.558 "block_size": 512, 00:05:00.558 "num_blocks": 16384, 00:05:00.558 "uuid": "a2d70401-bd4f-47c9-9654-16d9299da0cd", 00:05:00.558 "assigned_rate_limits": { 00:05:00.558 "rw_ios_per_sec": 0, 00:05:00.558 "rw_mbytes_per_sec": 0, 00:05:00.558 "r_mbytes_per_sec": 0, 00:05:00.558 "w_mbytes_per_sec": 0 00:05:00.558 }, 00:05:00.558 "claimed": false, 00:05:00.558 "zoned": false, 00:05:00.558 "supported_io_types": { 00:05:00.558 "read": true, 00:05:00.558 "write": true, 00:05:00.558 "unmap": true, 00:05:00.558 "flush": true, 00:05:00.558 "reset": true, 00:05:00.558 "nvme_admin": false, 00:05:00.558 "nvme_io": false, 00:05:00.558 "nvme_io_md": false, 00:05:00.558 "write_zeroes": true, 00:05:00.558 "zcopy": true, 00:05:00.558 "get_zone_info": false, 00:05:00.558 "zone_management": false, 00:05:00.558 "zone_append": false, 00:05:00.558 "compare": false, 00:05:00.558 "compare_and_write": false, 00:05:00.558 "abort": true, 00:05:00.558 "seek_hole": false, 00:05:00.558 "seek_data": false, 00:05:00.558 "copy": true, 00:05:00.558 "nvme_iov_md": false 00:05:00.558 }, 00:05:00.558 "memory_domains": [ 00:05:00.559 { 
00:05:00.559 "dma_device_id": "system", 00:05:00.559 "dma_device_type": 1 00:05:00.559 }, 00:05:00.559 { 00:05:00.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.559 "dma_device_type": 2 00:05:00.559 } 00:05:00.559 ], 00:05:00.559 "driver_specific": {} 00:05:00.559 } 00:05:00.559 ]' 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.559 [2024-10-17 16:33:14.110691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:00.559 [2024-10-17 16:33:14.110735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.559 [2024-10-17 16:33:14.110762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1793de0 00:05:00.559 [2024-10-17 16:33:14.110778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.559 [2024-10-17 16:33:14.112155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.559 [2024-10-17 16:33:14.112182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.559 Passthru0 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.559 { 00:05:00.559 "name": "Malloc2", 00:05:00.559 "aliases": [ 00:05:00.559 "a2d70401-bd4f-47c9-9654-16d9299da0cd" 00:05:00.559 ], 00:05:00.559 "product_name": "Malloc disk", 00:05:00.559 "block_size": 512, 00:05:00.559 "num_blocks": 16384, 00:05:00.559 "uuid": "a2d70401-bd4f-47c9-9654-16d9299da0cd", 00:05:00.559 "assigned_rate_limits": { 00:05:00.559 "rw_ios_per_sec": 0, 00:05:00.559 "rw_mbytes_per_sec": 0, 00:05:00.559 "r_mbytes_per_sec": 0, 00:05:00.559 "w_mbytes_per_sec": 0 00:05:00.559 }, 00:05:00.559 "claimed": true, 00:05:00.559 "claim_type": "exclusive_write", 00:05:00.559 "zoned": false, 00:05:00.559 "supported_io_types": { 00:05:00.559 "read": true, 00:05:00.559 "write": true, 00:05:00.559 "unmap": true, 00:05:00.559 "flush": true, 00:05:00.559 "reset": true, 00:05:00.559 "nvme_admin": false, 00:05:00.559 "nvme_io": false, 00:05:00.559 "nvme_io_md": false, 00:05:00.559 "write_zeroes": true, 00:05:00.559 "zcopy": true, 00:05:00.559 "get_zone_info": false, 00:05:00.559 "zone_management": false, 00:05:00.559 "zone_append": false, 00:05:00.559 "compare": false, 00:05:00.559 "compare_and_write": false, 00:05:00.559 "abort": true, 00:05:00.559 "seek_hole": false, 00:05:00.559 "seek_data": false, 00:05:00.559 "copy": true, 00:05:00.559 "nvme_iov_md": false 00:05:00.559 }, 00:05:00.559 "memory_domains": [ 00:05:00.559 { 00:05:00.559 "dma_device_id": "system", 00:05:00.559 "dma_device_type": 1 00:05:00.559 }, 00:05:00.559 { 00:05:00.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.559 "dma_device_type": 2 00:05:00.559 } 00:05:00.559 ], 00:05:00.559 "driver_specific": {} 00:05:00.559 }, 00:05:00.559 { 00:05:00.559 "name": "Passthru0", 00:05:00.559 "aliases": [ 00:05:00.559 "3e1c4f2b-a7a7-50bc-8a12-07e85734bb74" 00:05:00.559 ], 00:05:00.559 "product_name": "passthru", 00:05:00.559 "block_size": 512, 00:05:00.559 "num_blocks": 16384, 00:05:00.559 "uuid": 
"3e1c4f2b-a7a7-50bc-8a12-07e85734bb74", 00:05:00.559 "assigned_rate_limits": { 00:05:00.559 "rw_ios_per_sec": 0, 00:05:00.559 "rw_mbytes_per_sec": 0, 00:05:00.559 "r_mbytes_per_sec": 0, 00:05:00.559 "w_mbytes_per_sec": 0 00:05:00.559 }, 00:05:00.559 "claimed": false, 00:05:00.559 "zoned": false, 00:05:00.559 "supported_io_types": { 00:05:00.559 "read": true, 00:05:00.559 "write": true, 00:05:00.559 "unmap": true, 00:05:00.559 "flush": true, 00:05:00.559 "reset": true, 00:05:00.559 "nvme_admin": false, 00:05:00.559 "nvme_io": false, 00:05:00.559 "nvme_io_md": false, 00:05:00.559 "write_zeroes": true, 00:05:00.559 "zcopy": true, 00:05:00.559 "get_zone_info": false, 00:05:00.559 "zone_management": false, 00:05:00.559 "zone_append": false, 00:05:00.559 "compare": false, 00:05:00.559 "compare_and_write": false, 00:05:00.559 "abort": true, 00:05:00.559 "seek_hole": false, 00:05:00.559 "seek_data": false, 00:05:00.559 "copy": true, 00:05:00.559 "nvme_iov_md": false 00:05:00.559 }, 00:05:00.559 "memory_domains": [ 00:05:00.559 { 00:05:00.559 "dma_device_id": "system", 00:05:00.559 "dma_device_type": 1 00:05:00.559 }, 00:05:00.559 { 00:05:00.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.559 "dma_device_type": 2 00:05:00.559 } 00:05:00.559 ], 00:05:00.559 "driver_specific": { 00:05:00.559 "passthru": { 00:05:00.559 "name": "Passthru0", 00:05:00.559 "base_bdev_name": "Malloc2" 00:05:00.559 } 00:05:00.559 } 00:05:00.559 } 00:05:00.559 ]' 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.559 00:05:00.559 real 0m0.235s 00:05:00.559 user 0m0.157s 00:05:00.559 sys 0m0.020s 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.559 16:33:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.559 ************************************ 00:05:00.559 END TEST rpc_daemon_integrity 00:05:00.559 ************************************ 00:05:00.818 16:33:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:00.818 16:33:14 rpc -- rpc/rpc.sh@84 -- # killprocess 2226175 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 2226175 ']' 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@954 -- # kill -0 2226175 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@955 -- # uname 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.818 16:33:14 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2226175 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2226175' 00:05:00.818 killing process with pid 2226175 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@969 -- # kill 2226175 00:05:00.818 16:33:14 rpc -- common/autotest_common.sh@974 -- # wait 2226175 00:05:01.077 00:05:01.077 real 0m2.034s 00:05:01.077 user 0m2.543s 00:05:01.077 sys 0m0.615s 00:05:01.077 16:33:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.077 16:33:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.077 ************************************ 00:05:01.077 END TEST rpc 00:05:01.077 ************************************ 00:05:01.077 16:33:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.077 16:33:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.077 16:33:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.077 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:01.336 ************************************ 00:05:01.336 START TEST skip_rpc 00:05:01.336 ************************************ 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.336 * Looking for test storage... 
00:05:01.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.336 16:33:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.336 --rc genhtml_branch_coverage=1 00:05:01.336 --rc genhtml_function_coverage=1 00:05:01.336 --rc genhtml_legend=1 00:05:01.336 --rc geninfo_all_blocks=1 00:05:01.336 --rc geninfo_unexecuted_blocks=1 00:05:01.336 00:05:01.336 ' 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.336 --rc genhtml_branch_coverage=1 00:05:01.336 --rc genhtml_function_coverage=1 00:05:01.336 --rc genhtml_legend=1 00:05:01.336 --rc geninfo_all_blocks=1 00:05:01.336 --rc geninfo_unexecuted_blocks=1 00:05:01.336 00:05:01.336 ' 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:01.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.336 --rc genhtml_branch_coverage=1 00:05:01.336 --rc genhtml_function_coverage=1 00:05:01.336 --rc genhtml_legend=1 00:05:01.336 --rc geninfo_all_blocks=1 00:05:01.336 --rc geninfo_unexecuted_blocks=1 00:05:01.336 00:05:01.336 ' 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.336 --rc genhtml_branch_coverage=1 00:05:01.336 --rc genhtml_function_coverage=1 00:05:01.336 --rc genhtml_legend=1 00:05:01.336 --rc geninfo_all_blocks=1 00:05:01.336 --rc geninfo_unexecuted_blocks=1 00:05:01.336 00:05:01.336 ' 00:05:01.336 16:33:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.336 16:33:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.336 16:33:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.336 16:33:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.336 ************************************ 00:05:01.336 START TEST skip_rpc 00:05:01.336 ************************************ 00:05:01.336 16:33:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:01.336 16:33:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2226527 00:05:01.336 16:33:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.336 16:33:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.336 16:33:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:01.336 [2024-10-17 16:33:15.005232] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:05:01.336 [2024-10-17 16:33:15.005334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226527 ] 00:05:01.595 [2024-10-17 16:33:15.068451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.595 [2024-10-17 16:33:15.134998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:06.858 16:33:19 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2226527 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2226527 ']' 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2226527 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2226527 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.858 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2226527' 00:05:06.858 killing process with pid 2226527 00:05:06.859 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2226527 00:05:06.859 16:33:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2226527 00:05:06.859 00:05:06.859 real 0m5.466s 00:05:06.859 user 0m5.141s 00:05:06.859 sys 0m0.341s 00:05:06.859 16:33:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.859 16:33:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.859 ************************************ 00:05:06.859 END TEST skip_rpc 00:05:06.859 ************************************ 00:05:06.859 16:33:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:06.859 16:33:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.859 16:33:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.859 16:33:20 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.859 ************************************ 00:05:06.859 START TEST skip_rpc_with_json 00:05:06.859 ************************************ 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2227220 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2227220 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2227220 ']' 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.859 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.859 [2024-10-17 16:33:20.524064] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:05:06.859 [2024-10-17 16:33:20.524166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227220 ] 00:05:07.117 [2024-10-17 16:33:20.585191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.117 [2024-10-17 16:33:20.646784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.376 [2024-10-17 16:33:20.928586] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.376 request: 00:05:07.376 { 00:05:07.376 "trtype": "tcp", 00:05:07.376 "method": "nvmf_get_transports", 00:05:07.376 "req_id": 1 00:05:07.376 } 00:05:07.376 Got JSON-RPC error response 00:05:07.376 response: 00:05:07.376 { 00:05:07.376 "code": -19, 00:05:07.376 "message": "No such device" 00:05:07.376 } 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.376 [2024-10-17 16:33:20.936707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.376 16:33:20 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.376 16:33:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.635 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.635 16:33:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.635 { 00:05:07.635 "subsystems": [ 00:05:07.635 { 00:05:07.635 "subsystem": "fsdev", 00:05:07.635 "config": [ 00:05:07.635 { 00:05:07.635 "method": "fsdev_set_opts", 00:05:07.635 "params": { 00:05:07.635 "fsdev_io_pool_size": 65535, 00:05:07.635 "fsdev_io_cache_size": 256 00:05:07.635 } 00:05:07.635 } 00:05:07.635 ] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "vfio_user_target", 00:05:07.635 "config": null 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "keyring", 00:05:07.635 "config": [] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "iobuf", 00:05:07.635 "config": [ 00:05:07.635 { 00:05:07.635 "method": "iobuf_set_options", 00:05:07.635 "params": { 00:05:07.635 "small_pool_count": 8192, 00:05:07.635 "large_pool_count": 1024, 00:05:07.635 "small_bufsize": 8192, 00:05:07.635 "large_bufsize": 135168 00:05:07.635 } 00:05:07.635 } 00:05:07.635 ] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "sock", 00:05:07.635 "config": [ 00:05:07.635 { 00:05:07.635 "method": "sock_set_default_impl", 00:05:07.635 "params": { 00:05:07.635 "impl_name": "posix" 00:05:07.635 } 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "method": "sock_impl_set_options", 00:05:07.635 "params": { 00:05:07.635 "impl_name": "ssl", 00:05:07.635 "recv_buf_size": 4096, 00:05:07.635 "send_buf_size": 4096, 00:05:07.635 "enable_recv_pipe": true, 
00:05:07.635 "enable_quickack": false, 00:05:07.635 "enable_placement_id": 0, 00:05:07.635 "enable_zerocopy_send_server": true, 00:05:07.635 "enable_zerocopy_send_client": false, 00:05:07.635 "zerocopy_threshold": 0, 00:05:07.635 "tls_version": 0, 00:05:07.635 "enable_ktls": false 00:05:07.635 } 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "method": "sock_impl_set_options", 00:05:07.635 "params": { 00:05:07.635 "impl_name": "posix", 00:05:07.635 "recv_buf_size": 2097152, 00:05:07.635 "send_buf_size": 2097152, 00:05:07.635 "enable_recv_pipe": true, 00:05:07.635 "enable_quickack": false, 00:05:07.635 "enable_placement_id": 0, 00:05:07.635 "enable_zerocopy_send_server": true, 00:05:07.635 "enable_zerocopy_send_client": false, 00:05:07.635 "zerocopy_threshold": 0, 00:05:07.635 "tls_version": 0, 00:05:07.635 "enable_ktls": false 00:05:07.635 } 00:05:07.635 } 00:05:07.635 ] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "vmd", 00:05:07.635 "config": [] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "accel", 00:05:07.635 "config": [ 00:05:07.635 { 00:05:07.635 "method": "accel_set_options", 00:05:07.635 "params": { 00:05:07.635 "small_cache_size": 128, 00:05:07.635 "large_cache_size": 16, 00:05:07.635 "task_count": 2048, 00:05:07.635 "sequence_count": 2048, 00:05:07.635 "buf_count": 2048 00:05:07.635 } 00:05:07.635 } 00:05:07.635 ] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "bdev", 00:05:07.635 "config": [ 00:05:07.635 { 00:05:07.635 "method": "bdev_set_options", 00:05:07.635 "params": { 00:05:07.635 "bdev_io_pool_size": 65535, 00:05:07.635 "bdev_io_cache_size": 256, 00:05:07.635 "bdev_auto_examine": true, 00:05:07.635 "iobuf_small_cache_size": 128, 00:05:07.635 "iobuf_large_cache_size": 16 00:05:07.635 } 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "method": "bdev_raid_set_options", 00:05:07.635 "params": { 00:05:07.635 "process_window_size_kb": 1024, 00:05:07.635 "process_max_bandwidth_mb_sec": 0 00:05:07.635 } 00:05:07.635 }, 
00:05:07.635 { 00:05:07.635 "method": "bdev_iscsi_set_options", 00:05:07.635 "params": { 00:05:07.635 "timeout_sec": 30 00:05:07.635 } 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "method": "bdev_nvme_set_options", 00:05:07.635 "params": { 00:05:07.635 "action_on_timeout": "none", 00:05:07.635 "timeout_us": 0, 00:05:07.635 "timeout_admin_us": 0, 00:05:07.635 "keep_alive_timeout_ms": 10000, 00:05:07.635 "arbitration_burst": 0, 00:05:07.635 "low_priority_weight": 0, 00:05:07.635 "medium_priority_weight": 0, 00:05:07.635 "high_priority_weight": 0, 00:05:07.635 "nvme_adminq_poll_period_us": 10000, 00:05:07.635 "nvme_ioq_poll_period_us": 0, 00:05:07.635 "io_queue_requests": 0, 00:05:07.635 "delay_cmd_submit": true, 00:05:07.635 "transport_retry_count": 4, 00:05:07.635 "bdev_retry_count": 3, 00:05:07.635 "transport_ack_timeout": 0, 00:05:07.635 "ctrlr_loss_timeout_sec": 0, 00:05:07.635 "reconnect_delay_sec": 0, 00:05:07.635 "fast_io_fail_timeout_sec": 0, 00:05:07.635 "disable_auto_failback": false, 00:05:07.635 "generate_uuids": false, 00:05:07.635 "transport_tos": 0, 00:05:07.635 "nvme_error_stat": false, 00:05:07.635 "rdma_srq_size": 0, 00:05:07.635 "io_path_stat": false, 00:05:07.635 "allow_accel_sequence": false, 00:05:07.635 "rdma_max_cq_size": 0, 00:05:07.635 "rdma_cm_event_timeout_ms": 0, 00:05:07.635 "dhchap_digests": [ 00:05:07.635 "sha256", 00:05:07.635 "sha384", 00:05:07.635 "sha512" 00:05:07.635 ], 00:05:07.635 "dhchap_dhgroups": [ 00:05:07.635 "null", 00:05:07.635 "ffdhe2048", 00:05:07.635 "ffdhe3072", 00:05:07.635 "ffdhe4096", 00:05:07.635 "ffdhe6144", 00:05:07.635 "ffdhe8192" 00:05:07.635 ] 00:05:07.635 } 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "method": "bdev_nvme_set_hotplug", 00:05:07.635 "params": { 00:05:07.635 "period_us": 100000, 00:05:07.635 "enable": false 00:05:07.635 } 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "method": "bdev_wait_for_examine" 00:05:07.635 } 00:05:07.635 ] 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "scsi", 
00:05:07.635 "config": null 00:05:07.635 }, 00:05:07.635 { 00:05:07.635 "subsystem": "scheduler", 00:05:07.635 "config": [ 00:05:07.635 { 00:05:07.635 "method": "framework_set_scheduler", 00:05:07.635 "params": { 00:05:07.635 "name": "static" 00:05:07.635 } 00:05:07.635 } 00:05:07.635 ] 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "subsystem": "vhost_scsi", 00:05:07.636 "config": [] 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "subsystem": "vhost_blk", 00:05:07.636 "config": [] 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "subsystem": "ublk", 00:05:07.636 "config": [] 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "subsystem": "nbd", 00:05:07.636 "config": [] 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "subsystem": "nvmf", 00:05:07.636 "config": [ 00:05:07.636 { 00:05:07.636 "method": "nvmf_set_config", 00:05:07.636 "params": { 00:05:07.636 "discovery_filter": "match_any", 00:05:07.636 "admin_cmd_passthru": { 00:05:07.636 "identify_ctrlr": false 00:05:07.636 }, 00:05:07.636 "dhchap_digests": [ 00:05:07.636 "sha256", 00:05:07.636 "sha384", 00:05:07.636 "sha512" 00:05:07.636 ], 00:05:07.636 "dhchap_dhgroups": [ 00:05:07.636 "null", 00:05:07.636 "ffdhe2048", 00:05:07.636 "ffdhe3072", 00:05:07.636 "ffdhe4096", 00:05:07.636 "ffdhe6144", 00:05:07.636 "ffdhe8192" 00:05:07.636 ] 00:05:07.636 } 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "method": "nvmf_set_max_subsystems", 00:05:07.636 "params": { 00:05:07.636 "max_subsystems": 1024 00:05:07.636 } 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "method": "nvmf_set_crdt", 00:05:07.636 "params": { 00:05:07.636 "crdt1": 0, 00:05:07.636 "crdt2": 0, 00:05:07.636 "crdt3": 0 00:05:07.636 } 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "method": "nvmf_create_transport", 00:05:07.636 "params": { 00:05:07.636 "trtype": "TCP", 00:05:07.636 "max_queue_depth": 128, 00:05:07.636 "max_io_qpairs_per_ctrlr": 127, 00:05:07.636 "in_capsule_data_size": 4096, 00:05:07.636 "max_io_size": 131072, 00:05:07.636 "io_unit_size": 131072, 00:05:07.636 
"max_aq_depth": 128, 00:05:07.636 "num_shared_buffers": 511, 00:05:07.636 "buf_cache_size": 4294967295, 00:05:07.636 "dif_insert_or_strip": false, 00:05:07.636 "zcopy": false, 00:05:07.636 "c2h_success": true, 00:05:07.636 "sock_priority": 0, 00:05:07.636 "abort_timeout_sec": 1, 00:05:07.636 "ack_timeout": 0, 00:05:07.636 "data_wr_pool_size": 0 00:05:07.636 } 00:05:07.636 } 00:05:07.636 ] 00:05:07.636 }, 00:05:07.636 { 00:05:07.636 "subsystem": "iscsi", 00:05:07.636 "config": [ 00:05:07.636 { 00:05:07.636 "method": "iscsi_set_options", 00:05:07.636 "params": { 00:05:07.636 "node_base": "iqn.2016-06.io.spdk", 00:05:07.636 "max_sessions": 128, 00:05:07.636 "max_connections_per_session": 2, 00:05:07.636 "max_queue_depth": 64, 00:05:07.636 "default_time2wait": 2, 00:05:07.636 "default_time2retain": 20, 00:05:07.636 "first_burst_length": 8192, 00:05:07.636 "immediate_data": true, 00:05:07.636 "allow_duplicated_isid": false, 00:05:07.636 "error_recovery_level": 0, 00:05:07.636 "nop_timeout": 60, 00:05:07.636 "nop_in_interval": 30, 00:05:07.636 "disable_chap": false, 00:05:07.636 "require_chap": false, 00:05:07.636 "mutual_chap": false, 00:05:07.636 "chap_group": 0, 00:05:07.636 "max_large_datain_per_connection": 64, 00:05:07.636 "max_r2t_per_connection": 4, 00:05:07.636 "pdu_pool_size": 36864, 00:05:07.636 "immediate_data_pool_size": 16384, 00:05:07.636 "data_out_pool_size": 2048 00:05:07.636 } 00:05:07.636 } 00:05:07.636 ] 00:05:07.636 } 00:05:07.636 ] 00:05:07.636 } 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2227220 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2227220 ']' 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2227220 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2227220 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2227220' 00:05:07.636 killing process with pid 2227220 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2227220 00:05:07.636 16:33:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2227220 00:05:07.894 16:33:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2227360 00:05:07.894 16:33:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.894 16:33:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2227360 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2227360 ']' 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2227360 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2227360 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2227360' 00:05:13.184 killing process with pid 2227360 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2227360 00:05:13.184 16:33:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2227360 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.446 00:05:13.446 real 0m6.588s 00:05:13.446 user 0m6.226s 00:05:13.446 sys 0m0.681s 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.446 ************************************ 00:05:13.446 END TEST skip_rpc_with_json 00:05:13.446 ************************************ 00:05:13.446 16:33:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:13.446 16:33:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.446 16:33:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.446 16:33:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.446 ************************************ 00:05:13.446 START TEST skip_rpc_with_delay 00:05:13.446 ************************************ 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.446 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.705 [2024-10-17 16:33:27.167445] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:13.705 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:13.705 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.705 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.705 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.705 00:05:13.705 real 0m0.072s 00:05:13.705 user 0m0.046s 00:05:13.705 sys 0m0.026s 00:05:13.705 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.705 16:33:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 ************************************ 00:05:13.705 END TEST skip_rpc_with_delay 00:05:13.705 ************************************ 00:05:13.705 16:33:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:13.705 16:33:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:13.705 16:33:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:13.705 16:33:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.705 16:33:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.705 16:33:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 ************************************ 00:05:13.705 START TEST exit_on_failed_rpc_init 00:05:13.705 ************************************ 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2228075 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2228075 
00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2228075 ']' 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.705 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 [2024-10-17 16:33:27.290259] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:05:13.705 [2024-10-17 16:33:27.290354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228075 ] 00:05:13.705 [2024-10-17 16:33:27.352554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.964 [2024-10-17 16:33:27.414834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.222 
16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.222 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.222 [2024-10-17 16:33:27.744217] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:05:14.222 [2024-10-17 16:33:27.744322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228207 ] 00:05:14.222 [2024-10-17 16:33:27.806183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.222 [2024-10-17 16:33:27.871265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.222 [2024-10-17 16:33:27.871407] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:14.222 [2024-10-17 16:33:27.871432] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.222 [2024-10-17 16:33:27.871447] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2228075 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2228075 ']' 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2228075 00:05:14.480 16:33:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2228075 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2228075' 00:05:14.480 killing process with pid 2228075 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2228075 00:05:14.480 16:33:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2228075 00:05:15.048 00:05:15.048 real 0m1.203s 00:05:15.048 user 0m1.316s 00:05:15.048 sys 0m0.436s 00:05:15.048 16:33:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.048 16:33:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.048 ************************************ 00:05:15.048 END TEST exit_on_failed_rpc_init 00:05:15.048 ************************************ 00:05:15.048 16:33:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.048 00:05:15.048 real 0m13.685s 00:05:15.048 user 0m12.913s 00:05:15.048 sys 0m1.677s 00:05:15.048 16:33:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.048 16:33:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.048 ************************************ 00:05:15.048 END TEST skip_rpc 00:05:15.048 ************************************ 00:05:15.048 16:33:28 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.048 16:33:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.048 16:33:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.048 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.048 ************************************ 00:05:15.048 START TEST rpc_client 00:05:15.048 ************************************ 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.048 * Looking for test storage... 00:05:15.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.048 16:33:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.048 --rc genhtml_branch_coverage=1 00:05:15.048 --rc genhtml_function_coverage=1 00:05:15.048 --rc genhtml_legend=1 00:05:15.048 --rc geninfo_all_blocks=1 00:05:15.048 --rc geninfo_unexecuted_blocks=1 00:05:15.048 00:05:15.048 ' 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.048 --rc genhtml_branch_coverage=1 
00:05:15.048 --rc genhtml_function_coverage=1 00:05:15.048 --rc genhtml_legend=1 00:05:15.048 --rc geninfo_all_blocks=1 00:05:15.048 --rc geninfo_unexecuted_blocks=1 00:05:15.048 00:05:15.048 ' 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.048 --rc genhtml_branch_coverage=1 00:05:15.048 --rc genhtml_function_coverage=1 00:05:15.048 --rc genhtml_legend=1 00:05:15.048 --rc geninfo_all_blocks=1 00:05:15.048 --rc geninfo_unexecuted_blocks=1 00:05:15.048 00:05:15.048 ' 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.048 --rc genhtml_branch_coverage=1 00:05:15.048 --rc genhtml_function_coverage=1 00:05:15.048 --rc genhtml_legend=1 00:05:15.048 --rc geninfo_all_blocks=1 00:05:15.048 --rc geninfo_unexecuted_blocks=1 00:05:15.048 00:05:15.048 ' 00:05:15.048 16:33:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.048 OK 00:05:15.048 16:33:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.048 00:05:15.048 real 0m0.162s 00:05:15.048 user 0m0.111s 00:05:15.048 sys 0m0.060s 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.048 16:33:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.048 ************************************ 00:05:15.048 END TEST rpc_client 00:05:15.048 ************************************ 00:05:15.048 16:33:28 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.048 16:33:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.048 16:33:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.048 16:33:28 -- common/autotest_common.sh@10 
-- # set +x 00:05:15.048 ************************************ 00:05:15.048 START TEST json_config 00:05:15.048 ************************************ 00:05:15.048 16:33:28 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.308 16:33:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.308 16:33:28 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.308 16:33:28 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.308 16:33:28 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.308 16:33:28 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.308 16:33:28 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:15.308 16:33:28 json_config -- scripts/common.sh@345 -- # : 1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.308 16:33:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.308 16:33:28 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@353 -- # local d=1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.308 16:33:28 json_config -- scripts/common.sh@355 -- # echo 1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.308 16:33:28 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@353 -- # local d=2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.308 16:33:28 json_config -- scripts/common.sh@355 -- # echo 2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.308 16:33:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.308 16:33:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.308 16:33:28 json_config -- scripts/common.sh@368 -- # return 0 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.308 --rc genhtml_branch_coverage=1 00:05:15.308 --rc genhtml_function_coverage=1 00:05:15.308 --rc genhtml_legend=1 00:05:15.308 --rc geninfo_all_blocks=1 00:05:15.308 --rc geninfo_unexecuted_blocks=1 00:05:15.308 00:05:15.308 ' 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.308 --rc genhtml_branch_coverage=1 00:05:15.308 --rc genhtml_function_coverage=1 00:05:15.308 --rc genhtml_legend=1 00:05:15.308 --rc geninfo_all_blocks=1 00:05:15.308 --rc geninfo_unexecuted_blocks=1 00:05:15.308 00:05:15.308 ' 00:05:15.308 16:33:28 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.308 --rc genhtml_branch_coverage=1 00:05:15.308 --rc genhtml_function_coverage=1 00:05:15.308 --rc genhtml_legend=1 00:05:15.308 --rc geninfo_all_blocks=1 00:05:15.308 --rc geninfo_unexecuted_blocks=1 00:05:15.308 00:05:15.308 ' 00:05:15.308 16:33:28 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.308 --rc genhtml_branch_coverage=1 00:05:15.308 --rc genhtml_function_coverage=1 00:05:15.308 --rc genhtml_legend=1 00:05:15.308 --rc geninfo_all_blocks=1 00:05:15.308 --rc geninfo_unexecuted_blocks=1 00:05:15.308 00:05:15.308 ' 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.308 16:33:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.308 16:33:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.308 16:33:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.308 16:33:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.308 16:33:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.308 16:33:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.308 16:33:28 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.308 16:33:28 json_config -- paths/export.sh@5 -- # export PATH 00:05:15.308 16:33:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@51 -- # : 0 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.308 16:33:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:15.308 16:33:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:15.309 INFO: JSON configuration test init 00:05:15.309 16:33:28 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.309 16:33:28 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.309 16:33:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.309 16:33:28 json_config -- json_config/common.sh@10 -- # shift 00:05:15.309 16:33:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.309 16:33:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.309 16:33:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.309 16:33:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.309 16:33:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.309 16:33:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2228470 00:05:15.309 16:33:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.309 16:33:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.309 Waiting for target to run... 
00:05:15.309 16:33:28 json_config -- json_config/common.sh@25 -- # waitforlisten 2228470 /var/tmp/spdk_tgt.sock 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@831 -- # '[' -z 2228470 ']' 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.309 16:33:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.309 [2024-10-17 16:33:28.925685] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:05:15.309 [2024-10-17 16:33:28.925786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228470 ] 00:05:15.877 [2024-10-17 16:33:29.464931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.877 [2024-10-17 16:33:29.523903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.443 16:33:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.443 16:33:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:16.443 16:33:29 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.443 00:05:16.443 16:33:29 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:16.443 16:33:29 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:16.443 16:33:29 json_config -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.443 16:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.443 16:33:29 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:16.443 16:33:29 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:16.443 16:33:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.443 16:33:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.443 16:33:29 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.443 16:33:29 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:16.443 16:33:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:19.728 16:33:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.728 16:33:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:19.728 16:33:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:19.728 
16:33:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@54 -- # sort 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:19.986 16:33:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.986 16:33:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:05:19.986 16:33:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.986 16:33:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:19.986 16:33:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.986 16:33:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.244 MallocForNvmf0 00:05:20.244 16:33:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.244 16:33:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.502 MallocForNvmf1 00:05:20.502 16:33:33 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:20.502 16:33:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:20.760 [2024-10-17 16:33:34.242678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.760 16:33:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.760 16:33:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.018 16:33:34 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.018 16:33:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.275 16:33:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.275 16:33:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.533 16:33:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.533 16:33:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.791 [2024-10-17 16:33:35.310090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.791 16:33:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:21.791 16:33:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.791 16:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.791 16:33:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:21.791 16:33:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.791 16:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.791 16:33:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:21.791 16:33:35 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.791 16:33:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.049 MallocBdevForConfigChangeCheck 00:05:22.049 16:33:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:22.049 16:33:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.049 16:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.049 16:33:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:22.049 16:33:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.615 16:33:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:22.615 INFO: shutting down applications... 
00:05:22.615 16:33:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:22.615 16:33:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:22.615 16:33:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:22.615 16:33:36 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:23.988 Calling clear_iscsi_subsystem 00:05:23.988 Calling clear_nvmf_subsystem 00:05:23.988 Calling clear_nbd_subsystem 00:05:23.988 Calling clear_ublk_subsystem 00:05:23.988 Calling clear_vhost_blk_subsystem 00:05:23.988 Calling clear_vhost_scsi_subsystem 00:05:23.988 Calling clear_bdev_subsystem 00:05:24.246 16:33:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:24.246 16:33:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:24.247 16:33:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:24.247 16:33:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.247 16:33:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:24.247 16:33:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:24.505 16:33:38 json_config -- json_config/json_config.sh@352 -- # break 00:05:24.505 16:33:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:24.505 16:33:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:24.505 16:33:38 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:24.505 16:33:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.505 16:33:38 json_config -- json_config/common.sh@35 -- # [[ -n 2228470 ]] 00:05:24.505 16:33:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2228470 00:05:24.505 16:33:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.505 16:33:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.505 16:33:38 json_config -- json_config/common.sh@41 -- # kill -0 2228470 00:05:24.505 16:33:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.073 16:33:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.073 16:33:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.073 16:33:38 json_config -- json_config/common.sh@41 -- # kill -0 2228470 00:05:25.073 16:33:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:25.073 16:33:38 json_config -- json_config/common.sh@43 -- # break 00:05:25.073 16:33:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:25.073 16:33:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:25.073 SPDK target shutdown done 00:05:25.073 16:33:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:25.073 INFO: relaunching applications... 
00:05:25.073 16:33:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.073 16:33:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.073 16:33:38 json_config -- json_config/common.sh@10 -- # shift 00:05:25.073 16:33:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.073 16:33:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.073 16:33:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.073 16:33:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.073 16:33:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.073 16:33:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2229674 00:05:25.073 16:33:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.073 16:33:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.073 Waiting for target to run... 00:05:25.073 16:33:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2229674 /var/tmp/spdk_tgt.sock 00:05:25.073 16:33:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 2229674 ']' 00:05:25.073 16:33:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.073 16:33:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.073 16:33:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:25.074 16:33:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.074 16:33:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.074 [2024-10-17 16:33:38.655475] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:05:25.074 [2024-10-17 16:33:38.655574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229674 ] 00:05:25.642 [2024-10-17 16:33:39.192878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.642 [2024-10-17 16:33:39.251886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.926 [2024-10-17 16:33:42.317904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.926 [2024-10-17 16:33:42.350394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:28.926 16:33:42 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.926 16:33:42 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:28.926 16:33:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.926 00:05:28.926 16:33:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:28.926 16:33:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:28.926 INFO: Checking if target configuration is the same... 
00:05:28.926 16:33:42 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.926 16:33:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:28.926 16:33:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.926 + '[' 2 -ne 2 ']' 00:05:28.926 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.926 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:28.926 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:28.926 +++ basename /dev/fd/62 00:05:28.926 ++ mktemp /tmp/62.XXX 00:05:28.926 + tmp_file_1=/tmp/62.rsH 00:05:28.926 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.926 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.926 + tmp_file_2=/tmp/spdk_tgt_config.json.PYi 00:05:28.926 + ret=0 00:05:28.926 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.184 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.184 + diff -u /tmp/62.rsH /tmp/spdk_tgt_config.json.PYi 00:05:29.184 + echo 'INFO: JSON config files are the same' 00:05:29.184 INFO: JSON config files are the same 00:05:29.184 + rm /tmp/62.rsH /tmp/spdk_tgt_config.json.PYi 00:05:29.184 + exit 0 00:05:29.184 16:33:42 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:29.184 16:33:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:29.184 INFO: changing configuration and checking if this can be detected... 
00:05:29.184 16:33:42 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.184 16:33:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.442 16:33:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.442 16:33:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:29.442 16:33:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.442 + '[' 2 -ne 2 ']' 00:05:29.442 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.442 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:29.442 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:29.442 +++ basename /dev/fd/62 00:05:29.442 ++ mktemp /tmp/62.XXX 00:05:29.700 + tmp_file_1=/tmp/62.R2P 00:05:29.700 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.700 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.700 + tmp_file_2=/tmp/spdk_tgt_config.json.aEY 00:05:29.700 + ret=0 00:05:29.700 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.958 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.958 + diff -u /tmp/62.R2P /tmp/spdk_tgt_config.json.aEY 00:05:29.958 + ret=1 00:05:29.958 + echo '=== Start of file: /tmp/62.R2P ===' 00:05:29.958 + cat /tmp/62.R2P 00:05:29.958 + echo '=== End of file: /tmp/62.R2P ===' 00:05:29.958 + echo '' 00:05:29.958 + echo '=== Start of file: /tmp/spdk_tgt_config.json.aEY ===' 00:05:29.958 + cat /tmp/spdk_tgt_config.json.aEY 00:05:29.958 + echo '=== End of file: /tmp/spdk_tgt_config.json.aEY ===' 00:05:29.958 + echo '' 00:05:29.958 + rm /tmp/62.R2P /tmp/spdk_tgt_config.json.aEY 00:05:29.958 + exit 1 00:05:29.958 16:33:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:29.958 INFO: configuration change detected. 
00:05:29.958 16:33:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:29.958 16:33:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 2229674 ]] 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.959 16:33:43 json_config -- json_config/json_config.sh@330 -- # killprocess 2229674 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@950 -- # '[' -z 2229674 ']' 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@954 -- # kill -0 
2229674 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@955 -- # uname 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.959 16:33:43 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2229674 00:05:30.217 16:33:43 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.217 16:33:43 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.217 16:33:43 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2229674' 00:05:30.217 killing process with pid 2229674 00:05:30.217 16:33:43 json_config -- common/autotest_common.sh@969 -- # kill 2229674 00:05:30.217 16:33:43 json_config -- common/autotest_common.sh@974 -- # wait 2229674 00:05:31.630 16:33:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.630 16:33:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:31.630 16:33:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:31.630 16:33:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.630 16:33:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:31.630 16:33:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:31.630 INFO: Success 00:05:31.630 00:05:31.630 real 0m16.546s 00:05:31.630 user 0m17.910s 00:05:31.630 sys 0m2.930s 00:05:31.630 16:33:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.630 16:33:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.630 ************************************ 00:05:31.630 END TEST json_config 00:05:31.630 ************************************ 00:05:31.630 16:33:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.630 16:33:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.630 16:33:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.630 16:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.630 ************************************ 00:05:31.630 START TEST json_config_extra_key 00:05:31.630 ************************************ 00:05:31.630 16:33:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.889 --rc genhtml_branch_coverage=1 00:05:31.889 --rc genhtml_function_coverage=1 00:05:31.889 --rc genhtml_legend=1 00:05:31.889 --rc geninfo_all_blocks=1 
00:05:31.889 --rc geninfo_unexecuted_blocks=1 00:05:31.889 00:05:31.889 ' 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.889 --rc genhtml_branch_coverage=1 00:05:31.889 --rc genhtml_function_coverage=1 00:05:31.889 --rc genhtml_legend=1 00:05:31.889 --rc geninfo_all_blocks=1 00:05:31.889 --rc geninfo_unexecuted_blocks=1 00:05:31.889 00:05:31.889 ' 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.889 --rc genhtml_branch_coverage=1 00:05:31.889 --rc genhtml_function_coverage=1 00:05:31.889 --rc genhtml_legend=1 00:05:31.889 --rc geninfo_all_blocks=1 00:05:31.889 --rc geninfo_unexecuted_blocks=1 00:05:31.889 00:05:31.889 ' 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.889 --rc genhtml_branch_coverage=1 00:05:31.889 --rc genhtml_function_coverage=1 00:05:31.889 --rc genhtml_legend=1 00:05:31.889 --rc geninfo_all_blocks=1 00:05:31.889 --rc geninfo_unexecuted_blocks=1 00:05:31.889 00:05:31.889 ' 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.889 16:33:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.889 16:33:45 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.889 16:33:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.889 16:33:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.889 16:33:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.889 16:33:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:31.889 16:33:45 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.889 16:33:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:31.889 INFO: launching applications... 00:05:31.889 16:33:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2230595 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.889 Waiting for target to run... 
00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.889 16:33:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2230595 /var/tmp/spdk_tgt.sock 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2230595 ']' 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.889 16:33:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.889 [2024-10-17 16:33:45.513680] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:05:31.889 [2024-10-17 16:33:45.513787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230595 ] 00:05:32.454 [2024-10-17 16:33:45.871402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.454 [2024-10-17 16:33:45.920786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.020 16:33:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.020 16:33:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:33.020 00:05:33.020 16:33:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:33.020 INFO: shutting down applications... 00:05:33.020 16:33:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2230595 ]] 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2230595 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2230595 00:05:33.020 16:33:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.587 16:33:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.587 16:33:47 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.587 16:33:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2230595 00:05:33.587 16:33:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.587 16:33:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:33.587 16:33:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.587 16:33:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.587 SPDK target shutdown done 00:05:33.587 16:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:33.587 Success 00:05:33.587 00:05:33.587 real 0m1.695s 00:05:33.587 user 0m1.709s 00:05:33.587 sys 0m0.468s 00:05:33.587 16:33:47 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.587 16:33:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.587 ************************************ 00:05:33.587 END TEST json_config_extra_key 00:05:33.587 ************************************ 00:05:33.587 16:33:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.587 16:33:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.587 16:33:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.587 16:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.587 ************************************ 00:05:33.587 START TEST alias_rpc 00:05:33.587 ************************************ 00:05:33.587 16:33:47 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.587 * Looking for test storage... 
00:05:33.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:33.587 16:33:47 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.587 16:33:47 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.587 16:33:47 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.587 16:33:47 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.587 16:33:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.587 16:33:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.587 16:33:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.587 16:33:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.587 16:33:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.588 16:33:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.588 --rc genhtml_branch_coverage=1 00:05:33.588 --rc genhtml_function_coverage=1 00:05:33.588 --rc genhtml_legend=1 00:05:33.588 --rc geninfo_all_blocks=1 00:05:33.588 --rc geninfo_unexecuted_blocks=1 00:05:33.588 00:05:33.588 ' 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.588 --rc genhtml_branch_coverage=1 00:05:33.588 --rc genhtml_function_coverage=1 00:05:33.588 --rc genhtml_legend=1 00:05:33.588 --rc geninfo_all_blocks=1 00:05:33.588 --rc geninfo_unexecuted_blocks=1 00:05:33.588 00:05:33.588 ' 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:05:33.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.588 --rc genhtml_branch_coverage=1 00:05:33.588 --rc genhtml_function_coverage=1 00:05:33.588 --rc genhtml_legend=1 00:05:33.588 --rc geninfo_all_blocks=1 00:05:33.588 --rc geninfo_unexecuted_blocks=1 00:05:33.588 00:05:33.588 ' 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.588 --rc genhtml_branch_coverage=1 00:05:33.588 --rc genhtml_function_coverage=1 00:05:33.588 --rc genhtml_legend=1 00:05:33.588 --rc geninfo_all_blocks=1 00:05:33.588 --rc geninfo_unexecuted_blocks=1 00:05:33.588 00:05:33.588 ' 00:05:33.588 16:33:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.588 16:33:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2230907 00:05:33.588 16:33:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.588 16:33:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2230907 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2230907 ']' 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.588 16:33:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.588 [2024-10-17 16:33:47.261522] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:05:33.588 [2024-10-17 16:33:47.261616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230907 ] 00:05:33.847 [2024-10-17 16:33:47.320910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.847 [2024-10-17 16:33:47.381352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.105 16:33:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.105 16:33:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.105 16:33:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:34.363 16:33:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2230907 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2230907 ']' 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2230907 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2230907 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2230907' 00:05:34.363 killing process with pid 2230907 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@969 -- # kill 2230907 00:05:34.363 16:33:47 alias_rpc -- common/autotest_common.sh@974 -- # wait 2230907 00:05:34.929 00:05:34.929 real 0m1.370s 00:05:34.929 user 0m1.483s 00:05:34.929 sys 0m0.444s 00:05:34.929 16:33:48 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.929 16:33:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.929 ************************************ 00:05:34.929 END TEST alias_rpc 00:05:34.929 ************************************ 00:05:34.929 16:33:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:34.929 16:33:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.929 16:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.929 16:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.929 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.929 ************************************ 00:05:34.929 START TEST spdkcli_tcp 00:05:34.929 ************************************ 00:05:34.929 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.929 * Looking for test storage... 
00:05:34.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:34.929 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.929 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:34.929 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:34.929 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:34.929 16:33:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.929 16:33:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.929 16:33:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.929 16:33:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.930 16:33:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:34.930 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.930 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:34.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.930 --rc genhtml_branch_coverage=1 00:05:34.930 --rc genhtml_function_coverage=1 00:05:34.930 --rc genhtml_legend=1 00:05:34.930 --rc geninfo_all_blocks=1 00:05:34.930 --rc geninfo_unexecuted_blocks=1 00:05:34.930 00:05:34.930 ' 00:05:34.930 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:34.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.930 --rc genhtml_branch_coverage=1 00:05:34.930 --rc genhtml_function_coverage=1 00:05:34.930 --rc genhtml_legend=1 00:05:34.930 --rc geninfo_all_blocks=1 00:05:34.930 --rc geninfo_unexecuted_blocks=1 00:05:34.930 00:05:34.930 ' 00:05:35.189 16:33:48 spdkcli_tcp -- 
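The trace above shows scripts/common.sh evaluating `lt 1.15 2` by way of `cmp_versions`: each dotted version is split on `.` into an array and compared component by component. A minimal standalone sketch of that comparison follows; the function body is a simplified reconstruction for illustration, not the actual SPDK helper.

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" check, modeled on the
# cmp_versions trace in the log (scripts/common.sh). Simplified
# reconstruction -- not the real SPDK code.
lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)          # split "1.15" -> (1 15) via IFS
  local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < len; i++ )); do
    local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                               # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```

This is why the log takes the "old lcov" branch: `1.15 < 2` holds after the first component comparison, so the legacy `--rc` coverage options get exported.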
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.189 --rc genhtml_branch_coverage=1 00:05:35.189 --rc genhtml_function_coverage=1 00:05:35.189 --rc genhtml_legend=1 00:05:35.189 --rc geninfo_all_blocks=1 00:05:35.189 --rc geninfo_unexecuted_blocks=1 00:05:35.189 00:05:35.189 ' 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.189 --rc genhtml_branch_coverage=1 00:05:35.189 --rc genhtml_function_coverage=1 00:05:35.189 --rc genhtml_legend=1 00:05:35.189 --rc geninfo_all_blocks=1 00:05:35.189 --rc geninfo_unexecuted_blocks=1 00:05:35.189 00:05:35.189 ' 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2231111 00:05:35.189 16:33:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:35.189 16:33:48 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2231111 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2231111 ']' 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.189 16:33:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.189 [2024-10-17 16:33:48.676852] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:05:35.189 [2024-10-17 16:33:48.676938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231111 ] 00:05:35.189 [2024-10-17 16:33:48.733193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.189 [2024-10-17 16:33:48.794036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.189 [2024-10-17 16:33:48.794042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.447 16:33:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.447 16:33:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:35.447 16:33:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2231232 00:05:35.447 16:33:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.447 16:33:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:05:35.705 [ 00:05:35.705 "bdev_malloc_delete", 00:05:35.705 "bdev_malloc_create", 00:05:35.705 "bdev_null_resize", 00:05:35.706 "bdev_null_delete", 00:05:35.706 "bdev_null_create", 00:05:35.706 "bdev_nvme_cuse_unregister", 00:05:35.706 "bdev_nvme_cuse_register", 00:05:35.706 "bdev_opal_new_user", 00:05:35.706 "bdev_opal_set_lock_state", 00:05:35.706 "bdev_opal_delete", 00:05:35.706 "bdev_opal_get_info", 00:05:35.706 "bdev_opal_create", 00:05:35.706 "bdev_nvme_opal_revert", 00:05:35.706 "bdev_nvme_opal_init", 00:05:35.706 "bdev_nvme_send_cmd", 00:05:35.706 "bdev_nvme_set_keys", 00:05:35.706 "bdev_nvme_get_path_iostat", 00:05:35.706 "bdev_nvme_get_mdns_discovery_info", 00:05:35.706 "bdev_nvme_stop_mdns_discovery", 00:05:35.706 "bdev_nvme_start_mdns_discovery", 00:05:35.706 "bdev_nvme_set_multipath_policy", 00:05:35.706 "bdev_nvme_set_preferred_path", 00:05:35.706 "bdev_nvme_get_io_paths", 00:05:35.706 "bdev_nvme_remove_error_injection", 00:05:35.706 "bdev_nvme_add_error_injection", 00:05:35.706 "bdev_nvme_get_discovery_info", 00:05:35.706 "bdev_nvme_stop_discovery", 00:05:35.706 "bdev_nvme_start_discovery", 00:05:35.706 "bdev_nvme_get_controller_health_info", 00:05:35.706 "bdev_nvme_disable_controller", 00:05:35.706 "bdev_nvme_enable_controller", 00:05:35.706 "bdev_nvme_reset_controller", 00:05:35.706 "bdev_nvme_get_transport_statistics", 00:05:35.706 "bdev_nvme_apply_firmware", 00:05:35.706 "bdev_nvme_detach_controller", 00:05:35.706 "bdev_nvme_get_controllers", 00:05:35.706 "bdev_nvme_attach_controller", 00:05:35.706 "bdev_nvme_set_hotplug", 00:05:35.706 "bdev_nvme_set_options", 00:05:35.706 "bdev_passthru_delete", 00:05:35.706 "bdev_passthru_create", 00:05:35.706 "bdev_lvol_set_parent_bdev", 00:05:35.706 "bdev_lvol_set_parent", 00:05:35.706 "bdev_lvol_check_shallow_copy", 00:05:35.706 "bdev_lvol_start_shallow_copy", 00:05:35.706 "bdev_lvol_grow_lvstore", 00:05:35.706 "bdev_lvol_get_lvols", 00:05:35.706 "bdev_lvol_get_lvstores", 
00:05:35.706 "bdev_lvol_delete", 00:05:35.706 "bdev_lvol_set_read_only", 00:05:35.706 "bdev_lvol_resize", 00:05:35.706 "bdev_lvol_decouple_parent", 00:05:35.706 "bdev_lvol_inflate", 00:05:35.706 "bdev_lvol_rename", 00:05:35.706 "bdev_lvol_clone_bdev", 00:05:35.706 "bdev_lvol_clone", 00:05:35.706 "bdev_lvol_snapshot", 00:05:35.706 "bdev_lvol_create", 00:05:35.706 "bdev_lvol_delete_lvstore", 00:05:35.706 "bdev_lvol_rename_lvstore", 00:05:35.706 "bdev_lvol_create_lvstore", 00:05:35.706 "bdev_raid_set_options", 00:05:35.706 "bdev_raid_remove_base_bdev", 00:05:35.706 "bdev_raid_add_base_bdev", 00:05:35.706 "bdev_raid_delete", 00:05:35.706 "bdev_raid_create", 00:05:35.706 "bdev_raid_get_bdevs", 00:05:35.706 "bdev_error_inject_error", 00:05:35.706 "bdev_error_delete", 00:05:35.706 "bdev_error_create", 00:05:35.706 "bdev_split_delete", 00:05:35.706 "bdev_split_create", 00:05:35.706 "bdev_delay_delete", 00:05:35.706 "bdev_delay_create", 00:05:35.706 "bdev_delay_update_latency", 00:05:35.706 "bdev_zone_block_delete", 00:05:35.706 "bdev_zone_block_create", 00:05:35.706 "blobfs_create", 00:05:35.706 "blobfs_detect", 00:05:35.706 "blobfs_set_cache_size", 00:05:35.706 "bdev_aio_delete", 00:05:35.706 "bdev_aio_rescan", 00:05:35.706 "bdev_aio_create", 00:05:35.706 "bdev_ftl_set_property", 00:05:35.706 "bdev_ftl_get_properties", 00:05:35.706 "bdev_ftl_get_stats", 00:05:35.706 "bdev_ftl_unmap", 00:05:35.706 "bdev_ftl_unload", 00:05:35.706 "bdev_ftl_delete", 00:05:35.706 "bdev_ftl_load", 00:05:35.706 "bdev_ftl_create", 00:05:35.706 "bdev_virtio_attach_controller", 00:05:35.706 "bdev_virtio_scsi_get_devices", 00:05:35.706 "bdev_virtio_detach_controller", 00:05:35.706 "bdev_virtio_blk_set_hotplug", 00:05:35.706 "bdev_iscsi_delete", 00:05:35.706 "bdev_iscsi_create", 00:05:35.706 "bdev_iscsi_set_options", 00:05:35.706 "accel_error_inject_error", 00:05:35.706 "ioat_scan_accel_module", 00:05:35.706 "dsa_scan_accel_module", 00:05:35.706 "iaa_scan_accel_module", 00:05:35.706 
"vfu_virtio_create_fs_endpoint", 00:05:35.706 "vfu_virtio_create_scsi_endpoint", 00:05:35.706 "vfu_virtio_scsi_remove_target", 00:05:35.706 "vfu_virtio_scsi_add_target", 00:05:35.706 "vfu_virtio_create_blk_endpoint", 00:05:35.706 "vfu_virtio_delete_endpoint", 00:05:35.706 "keyring_file_remove_key", 00:05:35.706 "keyring_file_add_key", 00:05:35.706 "keyring_linux_set_options", 00:05:35.706 "fsdev_aio_delete", 00:05:35.706 "fsdev_aio_create", 00:05:35.706 "iscsi_get_histogram", 00:05:35.706 "iscsi_enable_histogram", 00:05:35.706 "iscsi_set_options", 00:05:35.706 "iscsi_get_auth_groups", 00:05:35.706 "iscsi_auth_group_remove_secret", 00:05:35.706 "iscsi_auth_group_add_secret", 00:05:35.706 "iscsi_delete_auth_group", 00:05:35.706 "iscsi_create_auth_group", 00:05:35.706 "iscsi_set_discovery_auth", 00:05:35.706 "iscsi_get_options", 00:05:35.706 "iscsi_target_node_request_logout", 00:05:35.706 "iscsi_target_node_set_redirect", 00:05:35.706 "iscsi_target_node_set_auth", 00:05:35.706 "iscsi_target_node_add_lun", 00:05:35.706 "iscsi_get_stats", 00:05:35.706 "iscsi_get_connections", 00:05:35.706 "iscsi_portal_group_set_auth", 00:05:35.706 "iscsi_start_portal_group", 00:05:35.706 "iscsi_delete_portal_group", 00:05:35.706 "iscsi_create_portal_group", 00:05:35.706 "iscsi_get_portal_groups", 00:05:35.706 "iscsi_delete_target_node", 00:05:35.706 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.706 "iscsi_target_node_add_pg_ig_maps", 00:05:35.706 "iscsi_create_target_node", 00:05:35.706 "iscsi_get_target_nodes", 00:05:35.706 "iscsi_delete_initiator_group", 00:05:35.706 "iscsi_initiator_group_remove_initiators", 00:05:35.706 "iscsi_initiator_group_add_initiators", 00:05:35.706 "iscsi_create_initiator_group", 00:05:35.706 "iscsi_get_initiator_groups", 00:05:35.706 "nvmf_set_crdt", 00:05:35.706 "nvmf_set_config", 00:05:35.707 "nvmf_set_max_subsystems", 00:05:35.707 "nvmf_stop_mdns_prr", 00:05:35.707 "nvmf_publish_mdns_prr", 00:05:35.707 "nvmf_subsystem_get_listeners", 00:05:35.707 
"nvmf_subsystem_get_qpairs", 00:05:35.707 "nvmf_subsystem_get_controllers", 00:05:35.707 "nvmf_get_stats", 00:05:35.707 "nvmf_get_transports", 00:05:35.707 "nvmf_create_transport", 00:05:35.707 "nvmf_get_targets", 00:05:35.707 "nvmf_delete_target", 00:05:35.707 "nvmf_create_target", 00:05:35.707 "nvmf_subsystem_allow_any_host", 00:05:35.707 "nvmf_subsystem_set_keys", 00:05:35.707 "nvmf_subsystem_remove_host", 00:05:35.707 "nvmf_subsystem_add_host", 00:05:35.707 "nvmf_ns_remove_host", 00:05:35.707 "nvmf_ns_add_host", 00:05:35.707 "nvmf_subsystem_remove_ns", 00:05:35.707 "nvmf_subsystem_set_ns_ana_group", 00:05:35.707 "nvmf_subsystem_add_ns", 00:05:35.707 "nvmf_subsystem_listener_set_ana_state", 00:05:35.707 "nvmf_discovery_get_referrals", 00:05:35.707 "nvmf_discovery_remove_referral", 00:05:35.707 "nvmf_discovery_add_referral", 00:05:35.707 "nvmf_subsystem_remove_listener", 00:05:35.707 "nvmf_subsystem_add_listener", 00:05:35.707 "nvmf_delete_subsystem", 00:05:35.707 "nvmf_create_subsystem", 00:05:35.707 "nvmf_get_subsystems", 00:05:35.707 "env_dpdk_get_mem_stats", 00:05:35.707 "nbd_get_disks", 00:05:35.707 "nbd_stop_disk", 00:05:35.707 "nbd_start_disk", 00:05:35.707 "ublk_recover_disk", 00:05:35.707 "ublk_get_disks", 00:05:35.707 "ublk_stop_disk", 00:05:35.707 "ublk_start_disk", 00:05:35.707 "ublk_destroy_target", 00:05:35.707 "ublk_create_target", 00:05:35.707 "virtio_blk_create_transport", 00:05:35.707 "virtio_blk_get_transports", 00:05:35.707 "vhost_controller_set_coalescing", 00:05:35.707 "vhost_get_controllers", 00:05:35.707 "vhost_delete_controller", 00:05:35.707 "vhost_create_blk_controller", 00:05:35.707 "vhost_scsi_controller_remove_target", 00:05:35.707 "vhost_scsi_controller_add_target", 00:05:35.707 "vhost_start_scsi_controller", 00:05:35.707 "vhost_create_scsi_controller", 00:05:35.707 "thread_set_cpumask", 00:05:35.707 "scheduler_set_options", 00:05:35.707 "framework_get_governor", 00:05:35.707 "framework_get_scheduler", 00:05:35.707 
"framework_set_scheduler", 00:05:35.707 "framework_get_reactors", 00:05:35.707 "thread_get_io_channels", 00:05:35.707 "thread_get_pollers", 00:05:35.707 "thread_get_stats", 00:05:35.707 "framework_monitor_context_switch", 00:05:35.707 "spdk_kill_instance", 00:05:35.707 "log_enable_timestamps", 00:05:35.707 "log_get_flags", 00:05:35.707 "log_clear_flag", 00:05:35.707 "log_set_flag", 00:05:35.707 "log_get_level", 00:05:35.707 "log_set_level", 00:05:35.707 "log_get_print_level", 00:05:35.707 "log_set_print_level", 00:05:35.707 "framework_enable_cpumask_locks", 00:05:35.707 "framework_disable_cpumask_locks", 00:05:35.707 "framework_wait_init", 00:05:35.707 "framework_start_init", 00:05:35.707 "scsi_get_devices", 00:05:35.707 "bdev_get_histogram", 00:05:35.707 "bdev_enable_histogram", 00:05:35.707 "bdev_set_qos_limit", 00:05:35.707 "bdev_set_qd_sampling_period", 00:05:35.707 "bdev_get_bdevs", 00:05:35.707 "bdev_reset_iostat", 00:05:35.707 "bdev_get_iostat", 00:05:35.707 "bdev_examine", 00:05:35.707 "bdev_wait_for_examine", 00:05:35.707 "bdev_set_options", 00:05:35.707 "accel_get_stats", 00:05:35.707 "accel_set_options", 00:05:35.707 "accel_set_driver", 00:05:35.707 "accel_crypto_key_destroy", 00:05:35.707 "accel_crypto_keys_get", 00:05:35.707 "accel_crypto_key_create", 00:05:35.707 "accel_assign_opc", 00:05:35.707 "accel_get_module_info", 00:05:35.707 "accel_get_opc_assignments", 00:05:35.707 "vmd_rescan", 00:05:35.707 "vmd_remove_device", 00:05:35.707 "vmd_enable", 00:05:35.707 "sock_get_default_impl", 00:05:35.707 "sock_set_default_impl", 00:05:35.707 "sock_impl_set_options", 00:05:35.707 "sock_impl_get_options", 00:05:35.707 "iobuf_get_stats", 00:05:35.707 "iobuf_set_options", 00:05:35.707 "keyring_get_keys", 00:05:35.707 "vfu_tgt_set_base_path", 00:05:35.707 "framework_get_pci_devices", 00:05:35.707 "framework_get_config", 00:05:35.707 "framework_get_subsystems", 00:05:35.707 "fsdev_set_opts", 00:05:35.707 "fsdev_get_opts", 00:05:35.707 "trace_get_info", 
00:05:35.707 "trace_get_tpoint_group_mask", 00:05:35.707 "trace_disable_tpoint_group", 00:05:35.707 "trace_enable_tpoint_group", 00:05:35.707 "trace_clear_tpoint_mask", 00:05:35.707 "trace_set_tpoint_mask", 00:05:35.707 "notify_get_notifications", 00:05:35.707 "notify_get_types", 00:05:35.707 "spdk_get_version", 00:05:35.707 "rpc_get_methods" 00:05:35.707 ] 00:05:35.707 16:33:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.707 16:33:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.707 16:33:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2231111 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2231111 ']' 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2231111 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.707 16:33:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2231111 00:05:35.965 16:33:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.965 16:33:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.965 16:33:49 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2231111' 00:05:35.965 killing process with pid 2231111 00:05:35.965 16:33:49 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2231111 00:05:35.965 16:33:49 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2231111 00:05:36.225 00:05:36.225 real 0m1.351s 00:05:36.225 user 0m2.416s 00:05:36.225 sys 0m0.454s 00:05:36.225 16:33:49 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.225 16:33:49 spdkcli_tcp -- 
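Each test above tears down with the same `killprocess` sequence: confirm the PID is still alive with `kill -0`, read its command name via `ps --no-headers -o comm=`, refuse to SIGKILL a `sudo` wrapper, then send `kill -9`. A simplified reconstruction of that guard (the real common/autotest_common.sh helper also waits on the PID afterwards, as the `wait 2231111` trace shows):

```shell
# Simplified reconstruction of the killprocess teardown traced in the
# log (common/autotest_common.sh); illustrative, not the exact body.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1      # bail out if already gone
  local name
  name=$(ps --no-headers -o comm= "$pid")     # e.g. "reactor_0" in the log
  [ "$name" = "sudo" ] && return 1            # never SIGKILL a sudo wrapper
  echo "killing process with pid $pid"
  kill -9 "$pid"
}

sleep 60 &                                    # disposable child to demonstrate on
killprocess $!
```

The `comm=` check is what produces the `process_name=reactor_0` lines in the trace: an SPDK target renames its main thread to `reactor_0`, so that is the name `ps` reports for the PID.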
common/autotest_common.sh@10 -- # set +x 00:05:36.225 ************************************ 00:05:36.225 END TEST spdkcli_tcp 00:05:36.225 ************************************ 00:05:36.225 16:33:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.225 16:33:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.225 16:33:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.225 16:33:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.225 ************************************ 00:05:36.225 START TEST dpdk_mem_utility 00:05:36.225 ************************************ 00:05:36.225 16:33:49 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.484 * Looking for test storage... 00:05:36.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:36.484 16:33:49 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.484 16:33:49 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.484 16:33:49 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.484 16:33:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 
00:05:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.484 --rc genhtml_branch_coverage=1 00:05:36.484 --rc genhtml_function_coverage=1 00:05:36.484 --rc genhtml_legend=1 00:05:36.484 --rc geninfo_all_blocks=1 00:05:36.484 --rc geninfo_unexecuted_blocks=1 00:05:36.484 00:05:36.484 ' 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.484 --rc genhtml_branch_coverage=1 00:05:36.484 --rc genhtml_function_coverage=1 00:05:36.484 --rc genhtml_legend=1 00:05:36.484 --rc geninfo_all_blocks=1 00:05:36.484 --rc geninfo_unexecuted_blocks=1 00:05:36.484 00:05:36.484 ' 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.484 --rc genhtml_branch_coverage=1 00:05:36.484 --rc genhtml_function_coverage=1 00:05:36.484 --rc genhtml_legend=1 00:05:36.484 --rc geninfo_all_blocks=1 00:05:36.484 --rc geninfo_unexecuted_blocks=1 00:05:36.484 00:05:36.484 ' 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.484 --rc genhtml_branch_coverage=1 00:05:36.484 --rc genhtml_function_coverage=1 00:05:36.484 --rc genhtml_legend=1 00:05:36.484 --rc geninfo_all_blocks=1 00:05:36.484 --rc geninfo_unexecuted_blocks=1 00:05:36.484 00:05:36.484 ' 00:05:36.484 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.484 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2231362 00:05:36.484 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.484 16:33:50 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2231362 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2231362 ']' 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.484 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.484 [2024-10-17 16:33:50.086158] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:05:36.484 [2024-10-17 16:33:50.086254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231362 ] 00:05:36.484 [2024-10-17 16:33:50.145395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.743 [2024-10-17 16:33:50.207551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.001 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.001 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:37.001 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.001 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.001 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.001 
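After launching `spdk_tgt`, every test blocks in `waitforlisten` until the UNIX-domain RPC socket `/var/tmp/spdk.sock` is up, retrying up to the `max_retries=100` visible in the trace. A simplified sketch of that polling loop; note the real helper takes the target PID as its first argument and probes the socket with rpc.py, both of which are omitted here:

```shell
# Simplified sketch of the waitforlisten loop traced in the log. The
# real common/autotest_common.sh helper takes a PID and verifies the
# socket answers RPCs; this version only waits for the socket file.
waitforlisten() {
  local rpc_addr=${1:-/var/tmp/spdk.sock}
  local max_retries=${2:-100}
  local i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$rpc_addr" ] && return 0     # socket file has appeared
    sleep 0.1
  done
  return 1                             # timed out waiting for the target
}
```

With a live target, the call returns as soon as the socket appears; against a missing path it fails after `max_retries` polls, which is what the ERR trap (`killprocess $spdk_tgt_pid; exit 1`) is there to catch.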
16:33:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.001 { 00:05:37.001 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.001 } 00:05:37.001 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.001 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.001 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:37.001 1 heaps totaling size 810.000000 MiB 00:05:37.001 size: 810.000000 MiB heap id: 0 00:05:37.001 end heaps---------- 00:05:37.001 9 mempools totaling size 595.772034 MiB 00:05:37.001 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.001 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.001 size: 92.545471 MiB name: bdev_io_2231362 00:05:37.001 size: 50.003479 MiB name: msgpool_2231362 00:05:37.001 size: 36.509338 MiB name: fsdev_io_2231362 00:05:37.001 size: 21.763794 MiB name: PDU_Pool 00:05:37.001 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.001 size: 4.133484 MiB name: evtpool_2231362 00:05:37.001 size: 0.026123 MiB name: Session_Pool 00:05:37.001 end mempools------- 00:05:37.001 6 memzones totaling size 4.142822 MiB 00:05:37.001 size: 1.000366 MiB name: RG_ring_0_2231362 00:05:37.001 size: 1.000366 MiB name: RG_ring_1_2231362 00:05:37.001 size: 1.000366 MiB name: RG_ring_4_2231362 00:05:37.001 size: 1.000366 MiB name: RG_ring_5_2231362 00:05:37.001 size: 0.125366 MiB name: RG_ring_2_2231362 00:05:37.001 size: 0.015991 MiB name: RG_ring_3_2231362 00:05:37.001 end memzones------- 00:05:37.001 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.001 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:37.001 list of free elements. 
size: 10.862488 MiB
00:05:37.001 element at address: 0x200018a00000 with size: 0.999878 MiB
00:05:37.001 element at address: 0x200018c00000 with size: 0.999878 MiB
00:05:37.001 element at address: 0x200000400000 with size: 0.998535 MiB
00:05:37.001 element at address: 0x200031800000 with size: 0.994446 MiB
00:05:37.001 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:37.001 element at address: 0x200012c00000 with size: 0.954285 MiB
00:05:37.001 element at address: 0x200018e00000 with size: 0.936584 MiB
00:05:37.001 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:37.001 element at address: 0x20001a600000 with size: 0.582886 MiB
00:05:37.001 element at address: 0x200000c00000 with size: 0.495422 MiB
00:05:37.001 element at address: 0x20000a600000 with size: 0.490723 MiB
00:05:37.001 element at address: 0x200019000000 with size: 0.485657 MiB
00:05:37.001 element at address: 0x200003e00000 with size: 0.481934 MiB
00:05:37.001 element at address: 0x200027a00000 with size: 0.410034 MiB
00:05:37.001 element at address: 0x200000800000 with size: 0.355042 MiB
00:05:37.001 list of standard malloc elements. size: 199.218628 MiB
00:05:37.001 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:37.001 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:37.001 element at address: 0x200018afff80 with size: 1.000122 MiB
00:05:37.001 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:05:37.001 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:37.001 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:37.001 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:05:37.002 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:37.002 element at address: 0x200018eefdc0 with size: 0.000305 MiB
00:05:37.002 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000085b040 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000085f300 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000087f680 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200000cff000 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200003efb980 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:05:37.002 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20001a695380 with size: 0.000183 MiB
00:05:37.002 element at address: 0x20001a695440 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200027a69040 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:05:37.002 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:05:37.002 list of memzone associated elements.
size: 599.918884 MiB
00:05:37.002 element at address: 0x20001a695500 with size: 211.416748 MiB
00:05:37.002 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:37.002 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:05:37.002 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:37.002 element at address: 0x200012df4780 with size: 92.045044 MiB
00:05:37.002 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2231362_0
00:05:37.002 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:37.002 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2231362_0
00:05:37.002 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:37.002 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2231362_0
00:05:37.002 element at address: 0x2000191be940 with size: 20.255554 MiB
00:05:37.002 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:37.002 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:05:37.002 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:37.002 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:37.002 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2231362_0
00:05:37.002 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:37.002 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2231362
00:05:37.002 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:37.002 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2231362
00:05:37.002 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:37.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:37.002 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:05:37.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:37.002 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:37.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:37.002 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:37.002 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:37.002 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:37.002 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2231362
00:05:37.002 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:37.002 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2231362
00:05:37.002 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:05:37.002 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2231362
00:05:37.002 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:05:37.002 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2231362
00:05:37.002 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:37.002 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2231362
00:05:37.002 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:37.002 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2231362
00:05:37.002 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:37.002 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:37.002 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:37.002 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:37.002 element at address: 0x20001907c540 with size: 0.250488 MiB
00:05:37.002 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:37.002 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:37.002 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2231362
00:05:37.002 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:05:37.002 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2231362
00:05:37.002 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:37.002 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:37.002 element at address: 0x200027a69100 with size: 0.023743 MiB
00:05:37.002 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:37.002 element at address: 0x20000085b100 with size: 0.016113 MiB
00:05:37.002 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2231362
00:05:37.002 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:05:37.002 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:37.002 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:05:37.002 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2231362
00:05:37.002 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:37.002 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2231362
00:05:37.002 element at address: 0x20000085af00 with size: 0.000305 MiB
00:05:37.002 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2231362
00:05:37.002 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:05:37.002 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:37.002 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:37.002 16:33:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2231362
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2231362 ']'
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2231362
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2231362
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:37.002 16:33:50
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2231362'
killing process with pid 2231362
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2231362
00:05:37.002 16:33:50 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2231362
00:05:37.572
00:05:37.572 real 0m1.215s
00:05:37.572 user 0m1.202s
00:05:37.572 sys 0m0.452s
00:05:37.572 16:33:51 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:37.572 16:33:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:37.572 ************************************
00:05:37.572 END TEST dpdk_mem_utility
00:05:37.572 ************************************
00:05:37.572 16:33:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:37.572 16:33:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:37.572 16:33:51 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:37.572 16:33:51 -- common/autotest_common.sh@10 -- # set +x
00:05:37.572 ************************************
00:05:37.572 START TEST event
00:05:37.572 ************************************
00:05:37.572 16:33:51 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:37.572 * Looking for test storage...
00:05:37.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:37.572 16:33:51 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:37.572 16:33:51 event -- common/autotest_common.sh@1691 -- # lcov --version
00:05:37.572 16:33:51 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:37.830 16:33:51 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:37.830 16:33:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:37.830 16:33:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:37.830 16:33:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:37.830 16:33:51 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:37.830 16:33:51 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:37.830 16:33:51 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:37.830 16:33:51 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:37.830 16:33:51 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:37.830 16:33:51 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:37.830 16:33:51 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:37.830 16:33:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:37.830 16:33:51 event -- scripts/common.sh@344 -- # case "$op" in
00:05:37.830 16:33:51 event -- scripts/common.sh@345 -- # : 1
00:05:37.830 16:33:51 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:37.830 16:33:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:37.830 16:33:51 event -- scripts/common.sh@365 -- # decimal 1
00:05:37.830 16:33:51 event -- scripts/common.sh@353 -- # local d=1
00:05:37.830 16:33:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:37.830 16:33:51 event -- scripts/common.sh@355 -- # echo 1
00:05:37.830 16:33:51 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:37.830 16:33:51 event -- scripts/common.sh@366 -- # decimal 2
00:05:37.830 16:33:51 event -- scripts/common.sh@353 -- # local d=2
00:05:37.830 16:33:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:37.830 16:33:51 event -- scripts/common.sh@355 -- # echo 2
00:05:37.830 16:33:51 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:37.830 16:33:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:37.830 16:33:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:37.830 16:33:51 event -- scripts/common.sh@368 -- # return 0
00:05:37.830 16:33:51 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:37.830 16:33:51 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:37.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.830 --rc genhtml_branch_coverage=1
00:05:37.830 --rc genhtml_function_coverage=1
00:05:37.830 --rc genhtml_legend=1
00:05:37.830 --rc geninfo_all_blocks=1
00:05:37.831 --rc geninfo_unexecuted_blocks=1
00:05:37.831
00:05:37.831 '
00:05:37.831 16:33:51 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:37.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.831 --rc genhtml_branch_coverage=1
00:05:37.831 --rc genhtml_function_coverage=1
00:05:37.831 --rc genhtml_legend=1
00:05:37.831 --rc geninfo_all_blocks=1
00:05:37.831 --rc geninfo_unexecuted_blocks=1
00:05:37.831
00:05:37.831 '
00:05:37.831 16:33:51 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:37.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.831 --rc genhtml_branch_coverage=1
00:05:37.831 --rc genhtml_function_coverage=1
00:05:37.831 --rc genhtml_legend=1
00:05:37.831 --rc geninfo_all_blocks=1
00:05:37.831 --rc geninfo_unexecuted_blocks=1
00:05:37.831
00:05:37.831 '
00:05:37.831 16:33:51 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:37.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.831 --rc genhtml_branch_coverage=1
00:05:37.831 --rc genhtml_function_coverage=1
00:05:37.831 --rc genhtml_legend=1
00:05:37.831 --rc geninfo_all_blocks=1
00:05:37.831 --rc geninfo_unexecuted_blocks=1
00:05:37.831
00:05:37.831 '
00:05:37.831 16:33:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:37.831 16:33:51 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:37.831 16:33:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:37.831 16:33:51 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:37.831 16:33:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:37.831 16:33:51 event -- common/autotest_common.sh@10 -- # set +x
00:05:37.831 ************************************
00:05:37.831 START TEST event_perf
00:05:37.831 ************************************
00:05:37.831 16:33:51 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:37.831 Running I/O for 1 seconds...[2024-10-17 16:33:51.322269] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:05:37.831 [2024-10-17 16:33:51.322338] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231633 ]
00:05:37.831 [2024-10-17 16:33:51.379378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:37.831 [2024-10-17 16:33:51.442624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.831 [2024-10-17 16:33:51.442689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:37.831 [2024-10-17 16:33:51.442756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:37.831 [2024-10-17 16:33:51.442759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.204 Running I/O for 1 seconds...
00:05:39.204 lcore 0: 231021
00:05:39.204 lcore 1: 231021
00:05:39.204 lcore 2: 231020
00:05:39.204 lcore 3: 231019
00:05:39.204 done.
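The xtrace lines above walk through the `lt 1.15 2` check from scripts/common.sh: both version strings are split on `.`, `-`, and `:` via IFS, then compared component by component. A simplified, self-contained sketch of that split-and-compare idea is below; this is an illustration under assumption, not the actual SPDK `cmp_versions` helper (the function name and padding of missing components are this sketch's own choices).

```shell
#!/usr/bin/env bash
# Sketch of a cmp_versions-style check, as traced in the log above.
# Hypothetical simplification; not copied from scripts/common.sh.
cmp_versions() {
    local IFS=.-:                 # split version strings on dots, dashes, colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v
    # Walk the longer of the two component lists
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        if (( a > b )); then
            [[ $op == '>' ]]; return            # decided: left is newer
        elif (( a < b )); then
            [[ $op == '<' ]]; return            # decided: left is older
        fi
    done
    [[ $op == '==' ]]             # every component matched
}

cmp_versions 1.15 '<' 2 && echo "older"
```

With this sketch, `lcov` 1.15 compares as older than 2, which is why the trace above falls through to the `--rc lcov_branch_coverage=1` options for old lcov releases.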
00:05:39.204
00:05:39.204 real 0m1.203s
00:05:39.204 user 0m4.132s
00:05:39.204 sys 0m0.067s
00:05:39.204 16:33:52 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:39.204 16:33:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:39.204 ************************************
00:05:39.204 END TEST event_perf
00:05:39.204 ************************************
00:05:39.204 16:33:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:39.204 16:33:52 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:39.204 16:33:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:39.204 16:33:52 event -- common/autotest_common.sh@10 -- # set +x
00:05:39.204 ************************************
00:05:39.204 START TEST event_reactor
00:05:39.204 ************************************
00:05:39.204 16:33:52 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:39.204 [2024-10-17 16:33:52.575410] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:05:39.204 [2024-10-17 16:33:52.575475] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231790 ]
00:05:39.204 [2024-10-17 16:33:52.637665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.204 [2024-10-17 16:33:52.702584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.138 test_start
00:05:40.138 oneshot
00:05:40.138 tick 100
00:05:40.138 tick 100
00:05:40.138 tick 250
00:05:40.138 tick 100
00:05:40.138 tick 100
00:05:40.138 tick 100
00:05:40.138 tick 250
00:05:40.138 tick 500
00:05:40.138 tick 100
00:05:40.138 tick 100
00:05:40.138 tick 250
00:05:40.138 tick 100
00:05:40.138 tick 100
00:05:40.138 test_end
00:05:40.138
00:05:40.138 real 0m1.208s
00:05:40.138 user 0m1.135s
00:05:40.138 sys 0m0.069s
00:05:40.138 16:33:53 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:40.138 16:33:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:40.138 ************************************
00:05:40.138 END TEST event_reactor
00:05:40.138 ************************************
00:05:40.138 16:33:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:40.138 16:33:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:40.138 16:33:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:40.138 16:33:53 event -- common/autotest_common.sh@10 -- # set +x
00:05:40.138 ************************************
00:05:40.138 START TEST event_reactor_perf
00:05:40.138 ************************************
00:05:40.138 16:33:53 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:40.461 [2024-10-17 16:33:53.830568] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:05:40.461 [2024-10-17 16:33:53.830630] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231948 ]
00:05:40.461 [2024-10-17 16:33:53.893160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.461 [2024-10-17 16:33:53.957574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.395 test_start
00:05:41.395 test_end
00:05:41.395 Performance: 355862 events per second
00:05:41.395
00:05:41.395 real 0m1.207s
00:05:41.395 user 0m1.137s
00:05:41.395 sys 0m0.065s
00:05:41.395 16:33:55 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:41.395 16:33:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:41.395 ************************************
00:05:41.395 END TEST event_reactor_perf
00:05:41.395 ************************************
00:05:41.395 16:33:55 event -- event/event.sh@49 -- # uname -s
00:05:41.395 16:33:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:41.395 16:33:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:41.395 16:33:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:41.395 16:33:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:41.395 16:33:55 event -- common/autotest_common.sh@10 -- # set +x
00:05:41.395 ************************************
00:05:41.395 START TEST event_scheduler
00:05:41.395 ************************************
00:05:41.395 16:33:55 event.event_scheduler -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:41.655 * Looking for test storage...
00:05:41.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:41.655 16:33:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.655 --rc genhtml_branch_coverage=1
00:05:41.655 --rc genhtml_function_coverage=1
00:05:41.655 --rc genhtml_legend=1
00:05:41.655 --rc geninfo_all_blocks=1
00:05:41.655 --rc geninfo_unexecuted_blocks=1
00:05:41.655
00:05:41.655 '
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.655 --rc genhtml_branch_coverage=1
00:05:41.655 --rc genhtml_function_coverage=1
00:05:41.655 --rc genhtml_legend=1
00:05:41.655 --rc geninfo_all_blocks=1
00:05:41.655 --rc geninfo_unexecuted_blocks=1
00:05:41.655
00:05:41.655 '
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.655 --rc genhtml_branch_coverage=1
00:05:41.655 --rc genhtml_function_coverage=1
00:05:41.655 --rc genhtml_legend=1
00:05:41.655 --rc geninfo_all_blocks=1
00:05:41.655 --rc geninfo_unexecuted_blocks=1
00:05:41.655
00:05:41.655 '
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.655 --rc genhtml_branch_coverage=1
00:05:41.655 --rc genhtml_function_coverage=1
00:05:41.655 --rc genhtml_legend=1
00:05:41.655 --rc geninfo_all_blocks=1
00:05:41.655 --rc geninfo_unexecuted_blocks=1
00:05:41.655
00:05:41.655 '
00:05:41.655 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:41.655 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2232149
00:05:41.655 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:41.655 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:41.655 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2232149
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2232149 ']'
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to
start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:41.655 16:33:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:41.655 [2024-10-17 16:33:55.256154] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:05:41.655 [2024-10-17 16:33:55.256238] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232149 ]
00:05:41.655 [2024-10-17 16:33:55.313856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:41.656 [2024-10-17 16:33:55.376404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.656 [2024-10-17 16:33:55.376466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.656 [2024-10-17 16:33:55.376532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:41.656 [2024-10-17 16:33:55.376535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:41.656 16:33:55 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.656 16:33:55 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:05:41.656 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:41.656 16:33:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:41.656 16:33:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:41.912 [2024-10-17 16:33:55.485417] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:41.912 [2024-10-17 16:33:55.485442] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:41.912 [2024-10-17 16:33:55.485473] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:41.912 [2024-10-17 16:33:55.485484] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:41.912 [2024-10-17 16:33:55.485495] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:41.912 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:41.912 [2024-10-17 16:33:55.584079] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:41.912 16:33:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:41.912 16:33:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 ************************************
00:05:42.171 START TEST scheduler_create_thread
00:05:42.171 ************************************
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 2
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 3
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 4
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 5
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 6
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 7
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 8
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 9
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 10
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.171 16:33:55
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.171 16:33:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.738 16:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.738 00:05:42.738 real 0m0.590s 00:05:42.738 user 0m0.012s 00:05:42.738 sys 0m0.003s 00:05:42.738 16:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.738 16:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.738 ************************************ 00:05:42.738 END TEST scheduler_create_thread 00:05:42.738 ************************************ 00:05:42.738 16:33:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:42.738 16:33:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2232149 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2232149 ']' 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 2232149 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2232149 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2232149' 00:05:42.738 killing process with pid 2232149 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2232149 00:05:42.738 16:33:56 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2232149 00:05:42.996 [2024-10-17 16:33:56.684234] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:43.254 00:05:43.254 real 0m1.819s 00:05:43.254 user 0m2.495s 00:05:43.254 sys 0m0.336s 00:05:43.254 16:33:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.254 16:33:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.254 ************************************ 00:05:43.254 END TEST event_scheduler 00:05:43.254 ************************************ 00:05:43.254 16:33:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.254 16:33:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.254 16:33:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.254 16:33:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.254 16:33:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.513 ************************************ 00:05:43.513 START TEST app_repeat 00:05:43.513 ************************************ 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2232453 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2232453' 00:05:43.513 Process app_repeat pid: 2232453 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:43.513 spdk_app_start Round 0 00:05:43.513 16:33:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2232453 /var/tmp/spdk-nbd.sock 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2232453 ']' 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.513 16:33:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.513 [2024-10-17 16:33:56.968410] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:05:43.513 [2024-10-17 16:33:56.968475] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232453 ] 00:05:43.513 [2024-10-17 16:33:57.030617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.513 [2024-10-17 16:33:57.094403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.513 [2024-10-17 16:33:57.094409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.770 16:33:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.770 16:33:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:43.770 16:33:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.028 Malloc0 00:05:44.028 16:33:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.287 Malloc1 00:05:44.287 16:33:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.287 
16:33:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.287 16:33:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.545 /dev/nbd0 00:05:44.545 16:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.545 16:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:44.545 1+0 records in 00:05:44.545 1+0 records out 00:05:44.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160883 s, 25.5 MB/s 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.545 16:33:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.545 16:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.545 16:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.545 16:33:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.804 /dev/nbd1 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.804 16:33:58 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.804 1+0 records in 00:05:44.804 1+0 records out 00:05:44.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176415 s, 23.2 MB/s 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.804 16:33:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.804 16:33:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.062 16:33:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.062 { 00:05:45.062 "nbd_device": "/dev/nbd0", 00:05:45.062 "bdev_name": "Malloc0" 00:05:45.062 }, 00:05:45.062 { 00:05:45.062 "nbd_device": "/dev/nbd1", 00:05:45.062 "bdev_name": "Malloc1" 00:05:45.062 } 00:05:45.062 ]' 00:05:45.062 16:33:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.062 { 00:05:45.062 "nbd_device": "/dev/nbd0", 00:05:45.062 "bdev_name": "Malloc0" 00:05:45.062 
}, 00:05:45.062 { 00:05:45.062 "nbd_device": "/dev/nbd1", 00:05:45.062 "bdev_name": "Malloc1" 00:05:45.062 } 00:05:45.062 ]' 00:05:45.062 16:33:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.320 16:33:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.321 /dev/nbd1' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.321 /dev/nbd1' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.321 256+0 records in 00:05:45.321 256+0 records out 00:05:45.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532946 s, 197 MB/s 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.321 256+0 records in 00:05:45.321 256+0 records out 00:05:45.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201536 s, 52.0 MB/s 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.321 256+0 records in 00:05:45.321 256+0 records out 00:05:45.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243652 s, 43.0 MB/s 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.321 16:33:58 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.321 16:33:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.579 16:33:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.837 16:33:59 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.837 16:33:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.095 16:33:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.095 16:33:59 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.661 16:34:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.661 [2024-10-17 16:34:00.270370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.661 [2024-10-17 16:34:00.332465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.661 [2024-10-17 16:34:00.332465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.919 [2024-10-17 16:34:00.394315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.919 [2024-10-17 16:34:00.394408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.445 16:34:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.445 16:34:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:49.445 spdk_app_start Round 1 00:05:49.445 16:34:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2232453 /var/tmp/spdk-nbd.sock 00:05:49.445 16:34:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2232453 ']' 00:05:49.445 16:34:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.445 16:34:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.445 16:34:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:49.445 16:34:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.446 16:34:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.703 16:34:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.703 16:34:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.703 16:34:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.961 Malloc0 00:05:49.961 16:34:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.220 Malloc1 00:05:50.220 16:34:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.220 16:34:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.786 /dev/nbd0 00:05:50.786 16:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.786 16:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.786 1+0 records in 00:05:50.786 1+0 records out 00:05:50.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019276 s, 21.2 MB/s 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.786 16:34:04 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.786 16:34:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.786 16:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.786 16:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.786 16:34:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.045 /dev/nbd1 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.045 1+0 records in 00:05:51.045 1+0 records out 00:05:51.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198063 s, 20.7 MB/s 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:51.045 16:34:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.045 16:34:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.303 { 00:05:51.303 "nbd_device": "/dev/nbd0", 00:05:51.303 "bdev_name": "Malloc0" 00:05:51.303 }, 00:05:51.303 { 00:05:51.303 "nbd_device": "/dev/nbd1", 00:05:51.303 "bdev_name": "Malloc1" 00:05:51.303 } 00:05:51.303 ]' 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.303 { 00:05:51.303 "nbd_device": "/dev/nbd0", 00:05:51.303 "bdev_name": "Malloc0" 00:05:51.303 }, 00:05:51.303 { 00:05:51.303 "nbd_device": "/dev/nbd1", 00:05:51.303 "bdev_name": "Malloc1" 00:05:51.303 } 00:05:51.303 ]' 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.303 /dev/nbd1' 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.303 /dev/nbd1' 00:05:51.303 
16:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.303 256+0 records in 00:05:51.303 256+0 records out 00:05:51.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497523 s, 211 MB/s 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.303 256+0 records in 00:05:51.303 256+0 records out 00:05:51.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232897 s, 45.0 MB/s 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.303 256+0 records in 00:05:51.303 256+0 records out 00:05:51.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231786 s, 45.2 MB/s 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.303 16:34:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.304 16:34:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.561 16:34:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.127 16:34:05 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.127 16:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.385 16:34:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.385 16:34:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.643 16:34:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.901 [2024-10-17 16:34:06.362272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.901 [2024-10-17 16:34:06.424970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.901 [2024-10-17 16:34:06.424969] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.901 [2024-10-17 16:34:06.483604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.901 [2024-10-17 16:34:06.483676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.184 16:34:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.184 16:34:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:56.184 spdk_app_start Round 2 00:05:56.184 16:34:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2232453 /var/tmp/spdk-nbd.sock 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2232453 ']' 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.184 16:34:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:56.184 16:34:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.184 Malloc0 00:05:56.184 16:34:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.443 Malloc1 00:05:56.443 16:34:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.443 16:34:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.701 /dev/nbd0 00:05:56.701 16:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.701 16:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.701 1+0 records in 00:05:56.701 1+0 records out 00:05:56.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186073 s, 22.0 MB/s 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:56.701 16:34:10 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:56.701 16:34:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:56.701 16:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.701 16:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.701 16:34:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.959 /dev/nbd1 00:05:56.959 16:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.959 16:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.959 1+0 records in 00:05:56.959 1+0 records out 00:05:56.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210407 s, 19.5 MB/s 00:05:56.959 16:34:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.216 16:34:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.216 16:34:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.216 16:34:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.216 16:34:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.216 16:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.216 16:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.216 16:34:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.216 16:34:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.216 16:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.472 { 00:05:57.472 "nbd_device": "/dev/nbd0", 00:05:57.472 "bdev_name": "Malloc0" 00:05:57.472 }, 00:05:57.472 { 00:05:57.472 "nbd_device": "/dev/nbd1", 00:05:57.472 "bdev_name": "Malloc1" 00:05:57.472 } 00:05:57.472 ]' 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.472 { 00:05:57.472 "nbd_device": "/dev/nbd0", 00:05:57.472 "bdev_name": "Malloc0" 00:05:57.472 }, 00:05:57.472 { 00:05:57.472 "nbd_device": "/dev/nbd1", 00:05:57.472 "bdev_name": "Malloc1" 00:05:57.472 } 00:05:57.472 ]' 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.472 /dev/nbd1' 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.472 /dev/nbd1' 00:05:57.472 
16:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.472 256+0 records in 00:05:57.472 256+0 records out 00:05:57.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509104 s, 206 MB/s 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.472 256+0 records in 00:05:57.472 256+0 records out 00:05:57.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192547 s, 54.5 MB/s 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.472 16:34:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.472 256+0 records in 00:05:57.472 256+0 records out 00:05:57.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241282 s, 43.5 MB/s 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.472 16:34:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.473 16:34:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.730 16:34:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.988 16:34:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.988 16:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.246 16:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.246 16:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.246 16:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.246 16:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.246 16:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.246 16:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.504 16:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.504 16:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.504 16:34:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.504 16:34:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.504 16:34:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.504 16:34:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.504 16:34:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.762 16:34:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.762 [2024-10-17 16:34:12.434385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.020 [2024-10-17 16:34:12.496612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.020 [2024-10-17 16:34:12.496617] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.020 [2024-10-17 16:34:12.556722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.020 [2024-10-17 16:34:12.556799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.548 16:34:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2232453 /var/tmp/spdk-nbd.sock 00:06:01.548 16:34:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2232453 ']' 00:06:01.548 16:34:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.548 16:34:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.548 16:34:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:01.548 16:34:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.548 16:34:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:01.806 16:34:15 event.app_repeat -- event/event.sh@39 -- # killprocess 2232453 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2232453 ']' 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2232453 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.806 16:34:15 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2232453 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2232453' 00:06:02.065 killing process with pid 2232453 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2232453 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2232453 00:06:02.065 spdk_app_start is called in Round 0. 00:06:02.065 Shutdown signal received, stop current app iteration 00:06:02.065 Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 reinitialization... 00:06:02.065 spdk_app_start is called in Round 1. 00:06:02.065 Shutdown signal received, stop current app iteration 00:06:02.065 Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 reinitialization... 00:06:02.065 spdk_app_start is called in Round 2. 
00:06:02.065 Shutdown signal received, stop current app iteration 00:06:02.065 Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 reinitialization... 00:06:02.065 spdk_app_start is called in Round 3. 00:06:02.065 Shutdown signal received, stop current app iteration 00:06:02.065 16:34:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.065 16:34:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.065 00:06:02.065 real 0m18.782s 00:06:02.065 user 0m41.514s 00:06:02.065 sys 0m3.235s 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.065 16:34:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.065 ************************************ 00:06:02.065 END TEST app_repeat 00:06:02.065 ************************************ 00:06:02.065 16:34:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.065 16:34:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.065 16:34:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.065 16:34:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.065 16:34:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.324 ************************************ 00:06:02.324 START TEST cpu_locks 00:06:02.324 ************************************ 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.324 * Looking for test storage... 
00:06:02.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.324 16:34:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.324 --rc genhtml_branch_coverage=1 00:06:02.324 --rc genhtml_function_coverage=1 00:06:02.324 --rc genhtml_legend=1 00:06:02.324 --rc geninfo_all_blocks=1 00:06:02.324 --rc geninfo_unexecuted_blocks=1 00:06:02.324 00:06:02.324 ' 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.324 --rc genhtml_branch_coverage=1 00:06:02.324 --rc genhtml_function_coverage=1 00:06:02.324 --rc genhtml_legend=1 00:06:02.324 --rc geninfo_all_blocks=1 00:06:02.324 --rc geninfo_unexecuted_blocks=1 
00:06:02.324 00:06:02.324 ' 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.324 --rc genhtml_branch_coverage=1 00:06:02.324 --rc genhtml_function_coverage=1 00:06:02.324 --rc genhtml_legend=1 00:06:02.324 --rc geninfo_all_blocks=1 00:06:02.324 --rc geninfo_unexecuted_blocks=1 00:06:02.324 00:06:02.324 ' 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.324 --rc genhtml_branch_coverage=1 00:06:02.324 --rc genhtml_function_coverage=1 00:06:02.324 --rc genhtml_legend=1 00:06:02.324 --rc geninfo_all_blocks=1 00:06:02.324 --rc geninfo_unexecuted_blocks=1 00:06:02.324 00:06:02.324 ' 00:06:02.324 16:34:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.324 16:34:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.324 16:34:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.324 16:34:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.324 16:34:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.324 ************************************ 00:06:02.324 START TEST default_locks 00:06:02.324 ************************************ 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2234827 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2234827 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2234827 ']' 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.324 16:34:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.324 [2024-10-17 16:34:16.003226] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:06:02.324 [2024-10-17 16:34:16.003340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234827 ] 00:06:02.583 [2024-10-17 16:34:16.064525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.583 [2024-10-17 16:34:16.125249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.842 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.842 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:02.842 16:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2234827 00:06:02.842 16:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2234827 00:06:02.842 16:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.099 lslocks: write error 00:06:03.099 16:34:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2234827 00:06:03.099 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2234827 ']' 00:06:03.099 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2234827 00:06:03.099 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:03.099 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.099 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2234827 00:06:03.357 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.357 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.357 16:34:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2234827' 00:06:03.357 killing process with pid 2234827 00:06:03.357 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2234827 00:06:03.357 16:34:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2234827 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2234827 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2234827 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2234827 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2234827 ']' 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
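The `NOT waitforlisten` check below expects the wait to fail once the target pid is gone. A simplified sketch of the polling idea (this is not the real autotest_common.sh helper, which also retries an RPC probe and reports "No such process"; only the socket path and retry count are modeled here):

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten-style polling: succeed once the
# UNIX domain socket exists, fail after max_retries attempts.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```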
00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2234827) - No such process 00:06:03.616 ERROR: process (pid: 2234827) is no longer running 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:03.616 00:06:03.616 real 0m1.308s 00:06:03.616 user 0m1.250s 00:06:03.616 sys 0m0.569s 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.616 16:34:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.616 ************************************ 00:06:03.616 END TEST default_locks 00:06:03.616 ************************************ 00:06:03.616 16:34:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:03.616 16:34:17 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.616 16:34:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.616 16:34:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.616 ************************************ 00:06:03.616 START TEST default_locks_via_rpc 00:06:03.616 ************************************ 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2235112 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2235112 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2235112 ']' 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.616 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.875 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.875 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.875 [2024-10-17 16:34:17.359199] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
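The default_locks_via_rpc test drives the lock files through RPC rather than through process lifetime; the `framework_disable_cpumask_locks` and `framework_enable_cpumask_locks` calls it issues (visible as `rpc_cmd` invocations below) could be sent from a shell roughly like this. The `rpc.py` path is an assumption; the RPC method names and socket path are taken from this log:

```shell
# Sketch only: toggle CPU core lock files on a running target over RPC.
# Assumes an spdk_tgt is already listening on /var/tmp/spdk.sock.
./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
```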
00:06:03.875 [2024-10-17 16:34:17.359278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235112 ] 00:06:03.875 [2024-10-17 16:34:17.419386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.875 [2024-10-17 16:34:17.480126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.134 16:34:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2235112 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2235112 00:06:04.134 16:34:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2235112 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2235112 ']' 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2235112 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2235112 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2235112' 00:06:04.392 killing process with pid 2235112 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2235112 00:06:04.392 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2235112 00:06:04.958 00:06:04.958 real 0m1.185s 00:06:04.958 user 0m1.146s 00:06:04.958 sys 0m0.511s 00:06:04.958 16:34:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.958 16:34:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.958 ************************************ 00:06:04.958 END TEST default_locks_via_rpc 00:06:04.958 ************************************ 00:06:04.958 16:34:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:04.958 16:34:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.958 16:34:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.958 16:34:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.958 ************************************ 00:06:04.958 START TEST non_locking_app_on_locked_coremask 00:06:04.958 ************************************ 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2235272 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2235272 /var/tmp/spdk.sock 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2235272 ']' 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:04.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.958 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.958 [2024-10-17 16:34:18.595010] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:04.958 [2024-10-17 16:34:18.595111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235272 ] 00:06:05.216 [2024-10-17 16:34:18.652471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.216 [2024-10-17 16:34:18.713413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2235278 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2235278 /var/tmp/spdk2.sock 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2235278 ']' 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.475 16:34:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.475 [2024-10-17 16:34:19.048862] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:05.475 [2024-10-17 16:34:19.048947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235278 ] 00:06:05.475 [2024-10-17 16:34:19.137405] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.475 [2024-10-17 16:34:19.137439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.734 [2024-10-17 16:34:19.264107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.669 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.669 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.669 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2235272 00:06:06.669 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2235272 00:06:06.669 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.931 lslocks: write error 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2235272 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2235272 ']' 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2235272 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2235272 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2235272' 00:06:06.931 killing process with pid 2235272 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2235272 00:06:06.931 16:34:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2235272 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2235278 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2235278 ']' 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2235278 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2235278 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2235278' 00:06:07.924 killing process with pid 2235278 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2235278 00:06:07.924 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2235278 00:06:08.183 00:06:08.183 real 0m3.205s 00:06:08.183 user 0m3.411s 00:06:08.183 sys 0m1.025s 00:06:08.183 16:34:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.183 16:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.183 ************************************ 00:06:08.183 END TEST non_locking_app_on_locked_coremask 00:06:08.183 ************************************ 00:06:08.183 16:34:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:08.183 16:34:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.183 16:34:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.183 16:34:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.183 ************************************ 00:06:08.183 START TEST locking_app_on_unlocked_coremask 00:06:08.183 ************************************ 00:06:08.183 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2235709 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2235709 /var/tmp/spdk.sock 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2235709 ']' 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.184 16:34:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.184 16:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.184 [2024-10-17 16:34:21.850357] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:08.184 [2024-10-17 16:34:21.850450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235709 ] 00:06:08.442 [2024-10-17 16:34:21.911146] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.442 [2024-10-17 16:34:21.911184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.442 [2024-10-17 16:34:21.972348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2235720 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2235720 /var/tmp/spdk2.sock 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2235720 ']' 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.700 16:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.700 [2024-10-17 16:34:22.310194] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
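The locking_app_on_unlocked_coremask test above starts two targets on the same core: because the first is launched with `--disable-cpumask-locks`, the second (locking) instance can still take the core-0 lock. A sketch of the two launches, with the flags and socket path taken from this log (binary path shortened; not runnable without an SPDK build):

```shell
# First target: core 0, cpumask locks disabled, default RPC socket.
spdk_tgt -m 0x1 --disable-cpumask-locks &
# Second target: same core, locks enabled, separate RPC socket so the
# two instances can be driven independently.
spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
```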
00:06:08.700 [2024-10-17 16:34:22.310285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235720 ] 00:06:08.959 [2024-10-17 16:34:22.402339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.959 [2024-10-17 16:34:22.524842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.893 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.893 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.893 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2235720 00:06:09.893 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2235720 00:06:09.893 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.151 lslocks: write error 00:06:10.151 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2235709 00:06:10.151 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2235709 ']' 00:06:10.151 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2235709 00:06:10.151 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.151 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.151 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2235709 00:06:10.409 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.409 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.409 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2235709' 00:06:10.409 killing process with pid 2235709 00:06:10.409 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2235709 00:06:10.409 16:34:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2235709 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2235720 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2235720 ']' 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2235720 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2235720 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2235720' 00:06:11.344 killing process with pid 2235720 00:06:11.344 16:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2235720 00:06:11.344 16:34:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2235720 00:06:11.603 00:06:11.603 real 0m3.377s 00:06:11.603 user 0m3.569s 00:06:11.603 sys 0m1.073s 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 ************************************ 00:06:11.603 END TEST locking_app_on_unlocked_coremask 00:06:11.603 ************************************ 00:06:11.603 16:34:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:11.603 16:34:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.603 16:34:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.603 16:34:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 ************************************ 00:06:11.603 START TEST locking_app_on_locked_coremask 00:06:11.603 ************************************ 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2236151 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2236151 /var/tmp/spdk.sock 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2236151 ']' 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.603 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.603 [2024-10-17 16:34:25.281557] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:11.603 [2024-10-17 16:34:25.281653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236151 ] 00:06:11.862 [2024-10-17 16:34:25.342228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.862 [2024-10-17 16:34:25.403073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2236154 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2236154 /var/tmp/spdk2.sock 
00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2236154 /var/tmp/spdk2.sock 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2236154 /var/tmp/spdk2.sock 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2236154 ']' 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.121 16:34:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.121 [2024-10-17 16:34:25.740950] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:12.121 [2024-10-17 16:34:25.741051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236154 ] 00:06:12.379 [2024-10-17 16:34:25.836870] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2236151 has claimed it. 00:06:12.379 [2024-10-17 16:34:25.836936] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2236154) - No such process 00:06:12.944 ERROR: process (pid: 2236154) is no longer running 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.944 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2236151 00:06:12.944 16:34:26 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2236151 00:06:12.945 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.510 lslocks: write error 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2236151 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2236151 ']' 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2236151 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236151 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236151' 00:06:13.510 killing process with pid 2236151 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2236151 00:06:13.510 16:34:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2236151 00:06:13.768 00:06:13.768 real 0m2.154s 00:06:13.768 user 0m2.344s 00:06:13.768 sys 0m0.663s 00:06:13.768 16:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.768 16:34:27 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:13.768 ************************************ 00:06:13.768 END TEST locking_app_on_locked_coremask 00:06:13.768 ************************************ 00:06:13.768 16:34:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:13.768 16:34:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.768 16:34:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.768 16:34:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.768 ************************************ 00:06:13.768 START TEST locking_overlapped_coremask 00:06:13.768 ************************************ 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2236447 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2236447 /var/tmp/spdk.sock 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2236447 ']' 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.768 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.026 [2024-10-17 16:34:27.487199] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:14.026 [2024-10-17 16:34:27.487278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236447 ] 00:06:14.026 [2024-10-17 16:34:27.548389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.026 [2024-10-17 16:34:27.612487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.026 [2024-10-17 16:34:27.612538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.026 [2024-10-17 16:34:27.612557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2236452 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2236452 /var/tmp/spdk2.sock 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 2236452 /var/tmp/spdk2.sock 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2236452 /var/tmp/spdk2.sock 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2236452 ']' 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.284 16:34:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.284 [2024-10-17 16:34:27.934163] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:06:14.284 [2024-10-17 16:34:27.934249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236452 ] 00:06:14.542 [2024-10-17 16:34:28.019738] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2236447 has claimed it. 00:06:14.542 [2024-10-17 16:34:28.019810] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2236452) - No such process 00:06:15.108 ERROR: process (pid: 2236452) is no longer running 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2236447 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2236447 ']' 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2236447 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236447 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236447' 00:06:15.108 killing process with pid 2236447 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2236447 00:06:15.108 16:34:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2236447 00:06:15.677 00:06:15.677 real 0m1.679s 00:06:15.677 user 0m4.682s 00:06:15.677 sys 0m0.442s 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.677 
************************************ 00:06:15.677 END TEST locking_overlapped_coremask 00:06:15.677 ************************************ 00:06:15.677 16:34:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.677 16:34:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.677 16:34:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.677 16:34:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.677 ************************************ 00:06:15.677 START TEST locking_overlapped_coremask_via_rpc 00:06:15.677 ************************************ 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2236622 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2236622 /var/tmp/spdk.sock 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2236622 ']' 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:15.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.677 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.677 [2024-10-17 16:34:29.222515] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:15.677 [2024-10-17 16:34:29.222613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236622 ] 00:06:15.677 [2024-10-17 16:34:29.279751] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.677 [2024-10-17 16:34:29.279792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.677 [2024-10-17 16:34:29.341088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.677 [2024-10-17 16:34:29.341143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.677 [2024-10-17 16:34:29.341147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2236752 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2236752 /var/tmp/spdk2.sock 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2236752 ']' 00:06:15.936 16:34:29 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.936 16:34:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.195 [2024-10-17 16:34:29.649740] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:16.195 [2024-10-17 16:34:29.649827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236752 ] 00:06:16.195 [2024-10-17 16:34:29.736591] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.195 [2024-10-17 16:34:29.736624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.195 [2024-10-17 16:34:29.857248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.195 [2024-10-17 16:34:29.857305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.195 [2024-10-17 16:34:29.857308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.130 16:34:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.130 [2024-10-17 16:34:30.667099] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2236622 has claimed it. 00:06:17.130 request: 00:06:17.130 { 00:06:17.130 "method": "framework_enable_cpumask_locks", 00:06:17.130 "req_id": 1 00:06:17.130 } 00:06:17.130 Got JSON-RPC error response 00:06:17.130 response: 00:06:17.130 { 00:06:17.130 "code": -32603, 00:06:17.130 "message": "Failed to claim CPU core: 2" 00:06:17.130 } 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2236622 /var/tmp/spdk.sock 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 2236622 ']' 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.130 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.387 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.387 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2236752 /var/tmp/spdk2.sock 00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2236752 ']' 00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.388 16:34:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.646 00:06:17.646 real 0m2.049s 00:06:17.646 user 0m1.157s 00:06:17.646 sys 0m0.173s 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.646 16:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.646 ************************************ 00:06:17.646 END TEST locking_overlapped_coremask_via_rpc 00:06:17.646 ************************************ 00:06:17.646 16:34:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.646 16:34:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2236622 ]] 00:06:17.646 16:34:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2236622 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2236622 ']' 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2236622 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236622 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236622' 00:06:17.646 killing process with pid 2236622 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2236622 00:06:17.646 16:34:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2236622 00:06:18.212 16:34:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2236752 ]] 00:06:18.212 16:34:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2236752 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2236752 ']' 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2236752 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236752 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2236752' 00:06:18.212 killing process with pid 2236752 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2236752 00:06:18.212 16:34:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2236752 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2236622 ]] 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2236622 00:06:18.471 16:34:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2236622 ']' 00:06:18.471 16:34:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2236622 00:06:18.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2236622) - No such process 00:06:18.471 16:34:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2236622 is not found' 00:06:18.471 Process with pid 2236622 is not found 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2236752 ]] 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2236752 00:06:18.471 16:34:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2236752 ']' 00:06:18.471 16:34:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2236752 00:06:18.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2236752) - No such process 00:06:18.471 16:34:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2236752 is not found' 00:06:18.471 Process with pid 2236752 is not found 00:06:18.471 16:34:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.730 00:06:18.730 real 0m16.389s 00:06:18.730 user 0m29.300s 00:06:18.730 sys 0m5.416s 00:06:18.730 16:34:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.730 
16:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.730 ************************************ 00:06:18.730 END TEST cpu_locks 00:06:18.730 ************************************ 00:06:18.730 00:06:18.730 real 0m41.040s 00:06:18.730 user 1m19.912s 00:06:18.730 sys 0m9.445s 00:06:18.730 16:34:32 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.730 16:34:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.730 ************************************ 00:06:18.730 END TEST event 00:06:18.730 ************************************ 00:06:18.730 16:34:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.730 16:34:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.730 16:34:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.730 16:34:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.730 ************************************ 00:06:18.730 START TEST thread 00:06:18.730 ************************************ 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.730 * Looking for test storage... 
00:06:18.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.730 16:34:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.730 16:34:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.730 16:34:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.730 16:34:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.730 16:34:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.730 16:34:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.730 16:34:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.730 16:34:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.730 16:34:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.730 16:34:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.730 16:34:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.730 16:34:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:18.730 16:34:32 thread -- scripts/common.sh@345 -- # : 1 00:06:18.730 16:34:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.730 16:34:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.730 16:34:32 thread -- scripts/common.sh@365 -- # decimal 1 00:06:18.730 16:34:32 thread -- scripts/common.sh@353 -- # local d=1 00:06:18.730 16:34:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.730 16:34:32 thread -- scripts/common.sh@355 -- # echo 1 00:06:18.730 16:34:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.730 16:34:32 thread -- scripts/common.sh@366 -- # decimal 2 00:06:18.730 16:34:32 thread -- scripts/common.sh@353 -- # local d=2 00:06:18.730 16:34:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.730 16:34:32 thread -- scripts/common.sh@355 -- # echo 2 00:06:18.730 16:34:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.730 16:34:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.730 16:34:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.730 16:34:32 thread -- scripts/common.sh@368 -- # return 0 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.730 --rc genhtml_branch_coverage=1 00:06:18.730 --rc genhtml_function_coverage=1 00:06:18.730 --rc genhtml_legend=1 00:06:18.730 --rc geninfo_all_blocks=1 00:06:18.730 --rc geninfo_unexecuted_blocks=1 00:06:18.730 00:06:18.730 ' 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.730 --rc genhtml_branch_coverage=1 00:06:18.730 --rc genhtml_function_coverage=1 00:06:18.730 --rc genhtml_legend=1 00:06:18.730 --rc geninfo_all_blocks=1 00:06:18.730 --rc geninfo_unexecuted_blocks=1 00:06:18.730 00:06:18.730 ' 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.730 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.730 --rc genhtml_branch_coverage=1 00:06:18.730 --rc genhtml_function_coverage=1 00:06:18.730 --rc genhtml_legend=1 00:06:18.730 --rc geninfo_all_blocks=1 00:06:18.730 --rc geninfo_unexecuted_blocks=1 00:06:18.730 00:06:18.730 ' 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.730 --rc genhtml_branch_coverage=1 00:06:18.730 --rc genhtml_function_coverage=1 00:06:18.730 --rc genhtml_legend=1 00:06:18.730 --rc geninfo_all_blocks=1 00:06:18.730 --rc geninfo_unexecuted_blocks=1 00:06:18.730 00:06:18.730 ' 00:06:18.730 16:34:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.730 16:34:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.730 ************************************ 00:06:18.730 START TEST thread_poller_perf 00:06:18.730 ************************************ 00:06:18.730 16:34:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.730 [2024-10-17 16:34:32.416546] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:06:18.730 [2024-10-17 16:34:32.416620] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237129 ] 00:06:18.989 [2024-10-17 16:34:32.478184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.989 [2024-10-17 16:34:32.540886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.989 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.364 [2024-10-17T14:34:34.054Z] ====================================== 00:06:20.364 [2024-10-17T14:34:34.054Z] busy:2709531416 (cyc) 00:06:20.364 [2024-10-17T14:34:34.054Z] total_run_count: 292000 00:06:20.364 [2024-10-17T14:34:34.054Z] tsc_hz: 2700000000 (cyc) 00:06:20.364 [2024-10-17T14:34:34.054Z] ====================================== 00:06:20.364 [2024-10-17T14:34:34.054Z] poller_cost: 9279 (cyc), 3436 (nsec) 00:06:20.364 00:06:20.364 real 0m1.214s 00:06:20.364 user 0m1.140s 00:06:20.364 sys 0m0.068s 00:06:20.364 16:34:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.364 16:34:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.364 ************************************ 00:06:20.364 END TEST thread_poller_perf 00:06:20.364 ************************************ 00:06:20.364 16:34:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.364 16:34:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:20.364 16:34:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.364 16:34:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.364 ************************************ 00:06:20.364 START TEST thread_poller_perf 00:06:20.364 
************************************ 00:06:20.364 16:34:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.364 [2024-10-17 16:34:33.684468] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:20.364 [2024-10-17 16:34:33.684534] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237282 ] 00:06:20.364 [2024-10-17 16:34:33.750828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.364 [2024-10-17 16:34:33.813524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.364 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:21.299 [2024-10-17T14:34:34.989Z] ====================================== 00:06:21.299 [2024-10-17T14:34:34.989Z] busy:2702473074 (cyc) 00:06:21.299 [2024-10-17T14:34:34.989Z] total_run_count: 3853000 00:06:21.299 [2024-10-17T14:34:34.989Z] tsc_hz: 2700000000 (cyc) 00:06:21.299 [2024-10-17T14:34:34.989Z] ====================================== 00:06:21.299 [2024-10-17T14:34:34.989Z] poller_cost: 701 (cyc), 259 (nsec) 00:06:21.299 00:06:21.299 real 0m1.215s 00:06:21.299 user 0m1.138s 00:06:21.299 sys 0m0.071s 00:06:21.299 16:34:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.299 16:34:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.299 ************************************ 00:06:21.299 END TEST thread_poller_perf 00:06:21.299 ************************************ 00:06:21.299 16:34:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:21.299 00:06:21.299 real 0m2.671s 00:06:21.299 user 0m2.411s 00:06:21.299 sys 0m0.263s 00:06:21.299 16:34:34 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.299 16:34:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.299 ************************************ 00:06:21.299 END TEST thread 00:06:21.299 ************************************ 00:06:21.299 16:34:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:21.299 16:34:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.299 16:34:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.299 16:34:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.299 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:06:21.299 ************************************ 00:06:21.299 START TEST app_cmdline 00:06:21.299 ************************************ 00:06:21.299 16:34:34 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.558 * Looking for test storage... 00:06:21.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.558 16:34:34 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.558 16:34:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:21.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.558 --rc genhtml_branch_coverage=1 
00:06:21.558 --rc genhtml_function_coverage=1 00:06:21.558 --rc genhtml_legend=1 00:06:21.558 --rc geninfo_all_blocks=1 00:06:21.558 --rc geninfo_unexecuted_blocks=1 00:06:21.558 00:06:21.558 ' 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:21.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.558 --rc genhtml_branch_coverage=1 00:06:21.558 --rc genhtml_function_coverage=1 00:06:21.558 --rc genhtml_legend=1 00:06:21.558 --rc geninfo_all_blocks=1 00:06:21.558 --rc geninfo_unexecuted_blocks=1 00:06:21.558 00:06:21.558 ' 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:21.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.558 --rc genhtml_branch_coverage=1 00:06:21.558 --rc genhtml_function_coverage=1 00:06:21.558 --rc genhtml_legend=1 00:06:21.558 --rc geninfo_all_blocks=1 00:06:21.558 --rc geninfo_unexecuted_blocks=1 00:06:21.558 00:06:21.558 ' 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:21.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.558 --rc genhtml_branch_coverage=1 00:06:21.558 --rc genhtml_function_coverage=1 00:06:21.558 --rc genhtml_legend=1 00:06:21.558 --rc geninfo_all_blocks=1 00:06:21.558 --rc geninfo_unexecuted_blocks=1 00:06:21.558 00:06:21.558 ' 00:06:21.558 16:34:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.558 16:34:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2237603 00:06:21.558 16:34:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.558 16:34:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2237603 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2237603 ']' 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.558 16:34:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.558 [2024-10-17 16:34:35.142281] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:21.558 [2024-10-17 16:34:35.142394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237603 ] 00:06:21.558 [2024-10-17 16:34:35.199316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.817 [2024-10-17 16:34:35.259730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.075 16:34:35 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.075 16:34:35 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:22.075 16:34:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.333 { 00:06:22.333 "version": "SPDK v25.01-pre git sha1 767a69c7c", 00:06:22.333 "fields": { 00:06:22.333 "major": 25, 00:06:22.333 "minor": 1, 00:06:22.333 "patch": 0, 00:06:22.333 "suffix": "-pre", 00:06:22.333 "commit": "767a69c7c" 00:06:22.333 } 00:06:22.333 } 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.333 16:34:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.333 16:34:35 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.591 request: 00:06:22.591 { 00:06:22.591 "method": "env_dpdk_get_mem_stats", 00:06:22.591 "req_id": 1 00:06:22.591 } 00:06:22.591 Got JSON-RPC error response 00:06:22.591 response: 00:06:22.591 { 00:06:22.591 "code": -32601, 00:06:22.591 "message": "Method not found" 00:06:22.591 } 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.591 16:34:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2237603 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2237603 ']' 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2237603 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2237603 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2237603' 00:06:22.591 killing process with pid 2237603 00:06:22.591 
16:34:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 2237603 00:06:22.591 16:34:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 2237603 00:06:23.158 00:06:23.158 real 0m1.611s 00:06:23.158 user 0m1.974s 00:06:23.158 sys 0m0.495s 00:06:23.158 16:34:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.158 16:34:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.158 ************************************ 00:06:23.158 END TEST app_cmdline 00:06:23.158 ************************************ 00:06:23.158 16:34:36 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.158 16:34:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.158 16:34:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.158 16:34:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.158 ************************************ 00:06:23.158 START TEST version 00:06:23.158 ************************************ 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.158 * Looking for test storage... 
00:06:23.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.158 16:34:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.158 16:34:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.158 16:34:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.158 16:34:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.158 16:34:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.158 16:34:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.158 16:34:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.158 16:34:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.158 16:34:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.158 16:34:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.158 16:34:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.158 16:34:36 version -- scripts/common.sh@344 -- # case "$op" in 00:06:23.158 16:34:36 version -- scripts/common.sh@345 -- # : 1 00:06:23.158 16:34:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.158 16:34:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.158 16:34:36 version -- scripts/common.sh@365 -- # decimal 1 00:06:23.158 16:34:36 version -- scripts/common.sh@353 -- # local d=1 00:06:23.158 16:34:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.158 16:34:36 version -- scripts/common.sh@355 -- # echo 1 00:06:23.158 16:34:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.158 16:34:36 version -- scripts/common.sh@366 -- # decimal 2 00:06:23.158 16:34:36 version -- scripts/common.sh@353 -- # local d=2 00:06:23.158 16:34:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.158 16:34:36 version -- scripts/common.sh@355 -- # echo 2 00:06:23.158 16:34:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.158 16:34:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.158 16:34:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.158 16:34:36 version -- scripts/common.sh@368 -- # return 0 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.158 --rc genhtml_branch_coverage=1 00:06:23.158 --rc genhtml_function_coverage=1 00:06:23.158 --rc genhtml_legend=1 00:06:23.158 --rc geninfo_all_blocks=1 00:06:23.158 --rc geninfo_unexecuted_blocks=1 00:06:23.158 00:06:23.158 ' 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.158 --rc genhtml_branch_coverage=1 00:06:23.158 --rc genhtml_function_coverage=1 00:06:23.158 --rc genhtml_legend=1 00:06:23.158 --rc geninfo_all_blocks=1 00:06:23.158 --rc geninfo_unexecuted_blocks=1 00:06:23.158 00:06:23.158 ' 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.158 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.158 --rc genhtml_branch_coverage=1 00:06:23.158 --rc genhtml_function_coverage=1 00:06:23.158 --rc genhtml_legend=1 00:06:23.158 --rc geninfo_all_blocks=1 00:06:23.158 --rc geninfo_unexecuted_blocks=1 00:06:23.158 00:06:23.158 ' 00:06:23.158 16:34:36 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.158 --rc genhtml_branch_coverage=1 00:06:23.159 --rc genhtml_function_coverage=1 00:06:23.159 --rc genhtml_legend=1 00:06:23.159 --rc geninfo_all_blocks=1 00:06:23.159 --rc geninfo_unexecuted_blocks=1 00:06:23.159 00:06:23.159 ' 00:06:23.159 16:34:36 version -- app/version.sh@17 -- # get_header_version major 00:06:23.159 16:34:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # cut -f2 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.159 16:34:36 version -- app/version.sh@17 -- # major=25 00:06:23.159 16:34:36 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.159 16:34:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # cut -f2 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.159 16:34:36 version -- app/version.sh@18 -- # minor=1 00:06:23.159 16:34:36 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.159 16:34:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # cut -f2 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.159 
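The `get_header_version` calls traced above pull each version component out of `include/spdk/version.h` with a `grep | cut | tr` pipeline. A minimal sketch of that pipeline follows, using an illustrative header written to a temp file (the `#define` names match the trace; the `-pre` suffix becoming `rc0` is inferred from the `version=25.1` → `version=25.1rc0` transition in the log, not read from app/version.sh itself):

```shell
#!/usr/bin/env bash
# Sketch of the extraction seen in the app/version.sh trace:
# grep the #define, keep the second tab-separated field, strip quotes.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n'       >  "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n'        >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t0\n'        >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n'  >> "$hdr"

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
if [ "$patch" -ne 0 ]; then version="$version.$patch"; fi   # trace shows patch=0, so skipped
if [ "$suffix" = "-pre" ]; then version="${version}rc0"; fi # inferred -pre -> rc0 mapping
echo "$version"
rm -f "$hdr"
```

With the sample header above this prints `25.1rc0`, matching the `py_version=25.1rc0` check in the log.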
16:34:36 version -- app/version.sh@19 -- # patch=0 00:06:23.159 16:34:36 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.159 16:34:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # cut -f2 00:06:23.159 16:34:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.159 16:34:36 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.159 16:34:36 version -- app/version.sh@22 -- # version=25.1 00:06:23.159 16:34:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.159 16:34:36 version -- app/version.sh@28 -- # version=25.1rc0 00:06:23.159 16:34:36 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.159 16:34:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.159 16:34:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:23.159 16:34:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:23.159 00:06:23.159 real 0m0.195s 00:06:23.159 user 0m0.126s 00:06:23.159 sys 0m0.094s 00:06:23.159 16:34:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.159 16:34:36 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.159 ************************************ 00:06:23.159 END TEST version 00:06:23.159 ************************************ 00:06:23.159 16:34:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:23.159 16:34:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:23.159 16:34:36 -- spdk/autotest.sh@194 -- # uname -s 00:06:23.159 16:34:36 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:23.159 16:34:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.159 16:34:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.159 16:34:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:23.159 16:34:36 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:23.159 16:34:36 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:23.159 16:34:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.159 16:34:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.417 16:34:36 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:23.417 16:34:36 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:23.417 16:34:36 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:23.417 16:34:36 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:23.417 16:34:36 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:23.417 16:34:36 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:23.417 16:34:36 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.417 16:34:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.417 16:34:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.417 16:34:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.417 ************************************ 00:06:23.417 START TEST nvmf_tcp 00:06:23.417 ************************************ 00:06:23.417 16:34:36 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.417 * Looking for test storage... 
00:06:23.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.417 16:34:36 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.417 16:34:36 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.417 16:34:36 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.417 16:34:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.417 16:34:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.417 16:34:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.417 16:34:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.418 16:34:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.418 --rc genhtml_branch_coverage=1 00:06:23.418 --rc genhtml_function_coverage=1 00:06:23.418 --rc genhtml_legend=1 00:06:23.418 --rc geninfo_all_blocks=1 00:06:23.418 --rc geninfo_unexecuted_blocks=1 00:06:23.418 00:06:23.418 ' 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.418 --rc genhtml_branch_coverage=1 00:06:23.418 --rc genhtml_function_coverage=1 00:06:23.418 --rc genhtml_legend=1 00:06:23.418 --rc geninfo_all_blocks=1 00:06:23.418 --rc geninfo_unexecuted_blocks=1 00:06:23.418 00:06:23.418 ' 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.418 --rc genhtml_branch_coverage=1 00:06:23.418 --rc genhtml_function_coverage=1 00:06:23.418 --rc genhtml_legend=1 00:06:23.418 --rc geninfo_all_blocks=1 00:06:23.418 --rc geninfo_unexecuted_blocks=1 00:06:23.418 00:06:23.418 ' 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.418 --rc genhtml_branch_coverage=1 00:06:23.418 --rc genhtml_function_coverage=1 00:06:23.418 --rc genhtml_legend=1 00:06:23.418 --rc geninfo_all_blocks=1 00:06:23.418 --rc geninfo_unexecuted_blocks=1 00:06:23.418 00:06:23.418 ' 00:06:23.418 16:34:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.418 16:34:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.418 16:34:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.418 16:34:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.418 ************************************ 00:06:23.418 START TEST nvmf_target_core 00:06:23.418 ************************************ 00:06:23.418 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.418 * Looking for test storage... 
00:06:23.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.418 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.418 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.418 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 
00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.677 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
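The `common.sh: line 33: [: : integer expression expected` warning just above comes from `test`'s numeric comparison being handed an empty string (`'[' '' -eq 1 ']'` in the trace). A small reproduction, using a hypothetical variable name since the real one is not visible in the log, together with the usual numeric-default guard:

```shell
#!/usr/bin/env bash
# Reproduce the warning: test's -eq needs integer operands, an empty/unset
# variable is not one. MAYBE_INTERACTIVE is an illustrative name only.
unset MAYBE_INTERACTIVE
rc=0
[ "$MAYBE_INTERACTIVE" -eq 1 ] 2>/dev/null || rc=$?
echo "bare -eq on an empty value exits with status $rc"   # >1 = usage error, not a clean false

# Common guard: give the variable a numeric default before comparing.
if [ "${MAYBE_INTERACTIVE:-0}" -eq 1 ]; then mode=interactive; else mode=batch; fi
echo "mode=$mode"
```

The comparison errors out (status greater than 1) rather than evaluating false, which is why the warning reaches the log even though the script continues.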
00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.678 ************************************ 00:06:23.678 START TEST nvmf_abort 00:06:23.678 ************************************ 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:23.678 * Looking for test storage... 
00:06:23.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.678 
16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.678 --rc genhtml_branch_coverage=1 00:06:23.678 --rc genhtml_function_coverage=1 00:06:23.678 --rc genhtml_legend=1 00:06:23.678 --rc geninfo_all_blocks=1 00:06:23.678 --rc 
geninfo_unexecuted_blocks=1 00:06:23.678 00:06:23.678 ' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.678 --rc genhtml_branch_coverage=1 00:06:23.678 --rc genhtml_function_coverage=1 00:06:23.678 --rc genhtml_legend=1 00:06:23.678 --rc geninfo_all_blocks=1 00:06:23.678 --rc geninfo_unexecuted_blocks=1 00:06:23.678 00:06:23.678 ' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.678 --rc genhtml_branch_coverage=1 00:06:23.678 --rc genhtml_function_coverage=1 00:06:23.678 --rc genhtml_legend=1 00:06:23.678 --rc geninfo_all_blocks=1 00:06:23.678 --rc geninfo_unexecuted_blocks=1 00:06:23.678 00:06:23.678 ' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.678 --rc genhtml_branch_coverage=1 00:06:23.678 --rc genhtml_function_coverage=1 00:06:23.678 --rc genhtml_legend=1 00:06:23.678 --rc geninfo_all_blocks=1 00:06:23.678 --rc geninfo_unexecuted_blocks=1 00:06:23.678 00:06:23.678 ' 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
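The `cmp_versions` walk from scripts/common.sh is replayed before every sub-test above; it decides whether the installed lcov predates 2.x and therefore needs the `--rc lcov_*` options. A condensed, set-e-safe sketch of the field-by-field compare the trace steps through (splitting on `.`, `-`, and `:` as the `IFS=.-:` lines show; helper details are simplified, not copied from scripts/common.sh):

```shell
#!/usr/bin/env bash
# Field-by-field dotted-version compare, as traced in scripts/common.sh.
cmp_versions() {
    local IFS=.-:            # split fields on '.', '-' and ':'
    local op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v n=${#ver1[@]}
    if (( ${#ver2[@]} > n )); then n=${#ver2[@]}; fi
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        if (( a > b )); then [[ $op == '>' ]] && return 0 || return 1; fi
        if (( a < b )); then [[ $op == '<' ]] && return 0 || return 1; fi
    done
    [[ $op == '==' ]] && return 0 || return 1   # all fields equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov is older than 2"
```

For `lt 1.15 2` the first field already decides it (1 < 2), which is the short-circuit the trace takes before `return 0`.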
00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.678 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.937 16:34:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.937 16:34:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:25.841 16:34:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:25.841 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.841 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:25.842 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:25.842 16:34:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:25.842 Found net devices under 0000:09:00.0: cvl_0_0 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:06:25.842 Found net devices under 0000:09:00.1: cvl_0_1 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:25.842 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:26.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:06:26.100 00:06:26.100 --- 10.0.0.2 ping statistics --- 00:06:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.100 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:06:26.100 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:06:26.100 00:06:26.100 --- 10.0.0.1 ping statistics --- 00:06:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.100 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2239694 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2239694 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2239694 ']' 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.101 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.101 [2024-10-17 16:34:39.706507] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
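An aside on the `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` error visible earlier in this trace: it comes from `'[' '' -eq 1 ']'`, a numeric `test` applied to an empty string. Below is a minimal, hedged reproduction (variable name is illustrative, not from the SPDK scripts) showing the failure mode and the usual defensive default:

```shell
#!/bin/sh
# Reproduces the failure mode behind "[: : integer expression expected":
# a numeric -eq test against an empty string is not an integer comparison.
VAR=""

# This test errors out (stderr suppressed here) and takes the else branch.
if [ "$VAR" -eq 1 ] 2>/dev/null; then
  echo "set"
else
  echo "unset or not 1"
fi

# Defensive form: default the empty value to 0 before the numeric test,
# so the comparison is always integer-vs-integer and never errors.
if [ "${VAR:-0}" -eq 1 ]; then
  echo "set"
else
  echo "unset or not 1"
fi
# Both branches print "unset or not 1" for an empty VAR.
```

The `${VAR:-0}` expansion is the conventional guard when a flag variable may be unset or empty at the time of the test.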
00:06:26.101 [2024-10-17 16:34:39.706596] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.101 [2024-10-17 16:34:39.774178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.359 [2024-10-17 16:34:39.839579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.360 [2024-10-17 16:34:39.839640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.360 [2024-10-17 16:34:39.839657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.360 [2024-10-17 16:34:39.839670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.360 [2024-10-17 16:34:39.839682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:26.360 [2024-10-17 16:34:39.841266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.360 [2024-10-17 16:34:39.841347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.360 [2024-10-17 16:34:39.841351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.360 [2024-10-17 16:34:39.982490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.360 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.360 Malloc0 00:06:26.360 16:34:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.360 Delay0 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.360 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 [2024-10-17 16:34:40.055706] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.618 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:26.618 [2024-10-17 16:34:40.204082] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:29.146 Initializing NVMe Controllers 00:06:29.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:29.146 controller IO queue size 128 less than required 00:06:29.146 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:29.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:29.146 Initialization complete. Launching workers. 
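The abort-example summary printed below reports `abort submitted 28357 ... success 28296, unsuccessful 61, failed 0`. As a quick sanity check (a sketch using the counter values copied from this particular run), successes plus unsuccessful completions should account for every submitted abort:

```shell
#!/bin/sh
# Cross-check of the abort accounting from this run's log output:
# every submitted abort should complete as either success or unsuccessful.
submitted=28357
success=28296
unsuccessful=61

if [ $((success + unsuccessful)) -eq "$submitted" ]; then
  echo "abort accounting consistent"
else
  echo "abort accounting mismatch"
fi
# Prints "abort accounting consistent" for these values.
```

The separate `failed to submit 62` counter covers aborts that never reached the controller queue and is tracked outside this sum.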
00:06:29.146 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28292 00:06:29.146 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28357, failed to submit 62 00:06:29.146 success 28296, unsuccessful 61, failed 0 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:29.146 rmmod nvme_tcp 00:06:29.146 rmmod nvme_fabrics 00:06:29.146 rmmod nvme_keyring 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:29.146 16:34:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2239694 ']' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2239694 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2239694 ']' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2239694 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2239694 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2239694' 00:06:29.146 killing process with pid 2239694 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2239694 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2239694 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.146 16:34:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.681 00:06:31.681 real 0m7.589s 00:06:31.681 user 0m11.335s 00:06:31.681 sys 0m2.563s 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.681 ************************************ 00:06:31.681 END TEST nvmf_abort 00:06:31.681 ************************************ 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.681 16:34:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.681 ************************************ 00:06:31.681 START TEST nvmf_ns_hotplug_stress 00:06:31.681 ************************************ 00:06:31.681 16:34:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.681 * Looking for test storage... 00:06:31.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.682 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:31.682 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:31.682 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:31.682 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:31.682 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.682 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.682 
16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.682 16:34:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 
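The `cmp_versions` trace above splits both version strings on `IFS=.-:` into arrays and compares them component by component until one side wins. A self-contained sketch of that idea (the helper name `ver_lt` is hypothetical; this is a simplified reading of `scripts/common.sh`, not its exact implementation):

```shell
# Element-wise version comparison, in the spirit of the cmp_versions trace:
# split on '.', '-' or ':' and decide on the first differing component.
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1                  # first difference decides
        (( a < b )) && return 0
    done
    return 1                                     # equal versions: not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the decision visible in the log: `lt 1.15 2` succeeds because the first components compare 1 < 2.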
00:06:31.682 ' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
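The PATH echoed above carries the same go/protoc/golangci directories many times over, because `paths/export.sh` prepends them once per nested sourcing. A first-occurrence-wins de-duplication pass can be sketched as follows (`dedup_path` is a hypothetical helper, not part of the SPDK scripts):

```shell
# Collapse repeated PATH entries, keeping the first occurrence of each.
dedup_path() {
    local IFS=: p out=
    local -A seen
    for p in $1; do                    # word-split on ':' via the local IFS
        if [[ -z ${seen[$p]} ]]; then
            seen[$p]=1
            out+=${out:+:}$p           # ':'-join after the first entry
        fi
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin"
```

Requires bash 4+ for the associative array; order of surviving entries matches the original left-to-right precedence, so lookup behavior is unchanged.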
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:31.682 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.683 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.584 16:34:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:33.584 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.584 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:33.585 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.585 16:34:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:33.585 Found net devices under 0000:09:00.0: cvl_0_0 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:33.585 16:34:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:33.585 Found net devices under 0000:09:00.1: cvl_0_1 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
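The `${pci_net_devs[@]##*/}` expansion traced above strips the sysfs directory from each glob match, leaving bare interface names like `cvl_0_0`. A self-contained illustration, with literal paths standing in for the real `/sys/bus/pci/devices/$pci/net/*` lookups:

```shell
# Each PCI NIC exposes its net devices as directories under
# /sys/bus/pci/devices/<addr>/net/; '##*/' keeps only the last path component.
pci_net_devs=("/sys/bus/pci/devices/0000:09:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:09:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"
```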
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.585 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.585 16:34:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:06:33.585 00:06:33.585 --- 10.0.0.2 ping statistics --- 00:06:33.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.585 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:06:33.585 00:06:33.585 --- 10.0.0.1 ping statistics --- 00:06:33.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.585 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
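The `nvmf_tcp_init` sequence above isolates the target NIC in a network namespace, addresses both sides of the 10.0.0.0/24 link, opens TCP port 4420, and verifies reachability with a ping in each direction. A minimal sketch of the same pattern, using a veth pair in place of the physical `cvl_0_0`/`cvl_0_1` interfaces (names are hypothetical; requires root):

```shell
# Target side lives in its own namespace; initiator stays in the host.
ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns

# Mirror the addressing from the log: initiator 10.0.0.1, target 10.0.0.2.
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up

# Allow NVMe/TCP traffic to the target port, then verify both directions.
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1
```

The namespace split is what lets a single host act as both initiator and target over a real network stack rather than loopback.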
tcp -o' 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2241935 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2241935 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2241935 ']' 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.585 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.585 [2024-10-17 16:34:47.142246] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:06:33.585 [2024-10-17 16:34:47.142330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.585 [2024-10-17 16:34:47.209918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.585 [2024-10-17 16:34:47.272539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.585 [2024-10-17 16:34:47.272598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.585 [2024-10-17 16:34:47.272615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.585 [2024-10-17 16:34:47.272629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.585 [2024-10-17 16:34:47.272641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:33.843 [2024-10-17 16:34:47.274179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:33.843 [2024-10-17 16:34:47.274206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:33.843 [2024-10-17 16:34:47.274210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:33.843 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:34.101 [2024-10-17 16:34:47.648493] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:34.101 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:34.416 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:34.682 [2024-10-17 16:34:48.191237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:34.682 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:34.940 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:35.198 Malloc0
00:06:35.198 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:35.456 Delay0
00:06:35.456 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:35.714 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:35.971 NULL1
00:06:35.971 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:36.229 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2242374
00:06:36.229 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:36.229 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:36.229 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.602 Read completed with error (sct=0, sc=11)
00:06:37.602 16:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:37.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:37.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:37.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:37.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:37.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:37.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:37.860 16:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:37.860 16:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:38.118 true
00:06:38.118 16:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:38.118 16:34:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:38.683 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:38.941 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:38.941 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:39.199 true
00:06:39.199 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:39.199 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:39.764 16:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:39.764 16:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:06:39.764 16:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:06:40.021 true
00:06:40.021 16:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:40.021 16:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:40.278 16:34:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:40.843 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:06:40.843 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:06:40.843 true
00:06:40.843 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:40.843 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:41.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:41.775 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:41.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:42.033 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:06:42.033 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:06:42.290 true
00:06:42.290 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:42.290 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:42.548 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:42.805 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:06:42.805 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:06:43.113 true
00:06:43.113 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:43.113 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:43.412 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:43.676 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:06:43.676 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:06:43.935 true
00:06:43.935 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:43.935 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:44.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:44.868 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.434 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:06:45.434 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:06:45.434 true
00:06:45.434 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:45.434 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.691 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.949 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:06:45.949 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:06:46.206 true
00:06:46.464 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:46.464 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.721 16:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.979 16:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:06:46.979 16:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:06:47.237 true
00:06:47.237 16:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:47.237 16:35:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.169 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:48.427 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:06:48.427 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:06:48.684 true
00:06:48.684 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:48.684 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.941 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.199 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:06:49.199 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:06:49.457 true
00:06:49.457 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:49.457 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.715 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.975 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:06:49.975 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:06:50.235 true
00:06:50.235 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:50.235 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.168 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:51.733 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:06:51.733 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:06:51.733 true
00:06:51.733 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:51.733 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.991 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:52.249 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:52.249 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:52.506 true
00:06:52.506 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:52.506 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.072 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:53.072 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:53.072 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:53.329 true
00:06:53.329 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:53.329 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:54.702 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:54.702 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:54.702 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:54.960 true
00:06:54.960 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:54.960 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.218 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:55.475 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:55.475 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:55.732 true
00:06:55.732 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:55.733 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.988 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:56.245 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:56.245 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:56.503 true
00:06:56.503 16:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:56.503 16:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:57.436 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:57.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:57.697 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:57.697 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:57.955 true
00:06:57.955 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374
00:06:57.955 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.212 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.469 16:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:58.469 16:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:58.727 true 00:06:58.727 16:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:06:58.727 16:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.660 16:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.917 16:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:59.917 16:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:00.175 true 00:07:00.175 16:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:00.175 16:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.434 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.691 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.691 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:00.947 true 00:07:00.947 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:00.947 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.204 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.461 16:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:01.461 16:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:01.717 true 00:07:01.717 16:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:01.717 16:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.085 16:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.085 16:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:03.085 16:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:03.342 true 00:07:03.342 16:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:03.342 16:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.598 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.857 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:03.857 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:04.115 true 00:07:04.115 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:04.115 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:04.372 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.635 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:04.635 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:04.893 true 00:07:04.893 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:04.893 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.266 16:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.266 16:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:06.266 16:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:06.524 true 00:07:06.524 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:06.524 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.524 Initializing NVMe Controllers 00:07:06.524 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.524 Controller IO queue size 128, less than required. 00:07:06.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.524 Controller IO queue size 128, less than required. 00:07:06.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:06.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:06.524 Initialization complete. Launching workers. 00:07:06.524 ======================================================== 00:07:06.524 Latency(us) 00:07:06.524 Device Information : IOPS MiB/s Average min max 00:07:06.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 495.66 0.24 105421.12 3044.46 1013033.83 00:07:06.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8186.26 4.00 15637.48 4032.75 448194.24 00:07:06.524 ======================================================== 00:07:06.524 Total : 8681.92 4.24 20763.37 3044.46 1013033.83 00:07:06.524 00:07:06.782 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.039 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:07.039 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:07.298 true 00:07:07.298 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2242374 00:07:07.298 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2242374) - No such process 00:07:07.298 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2242374 00:07:07.298 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.557 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.815 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:07.815 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:07.815 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:07.816 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.816 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:08.073 null0 00:07:08.073 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.073 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.073 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:08.331 null1 00:07:08.331 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.331 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.331 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:08.589 null2 00:07:08.589 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.589 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.589 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:08.846 null3 00:07:08.846 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.846 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.846 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:09.104 null4 00:07:09.104 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.104 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.104 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:09.362 null5 00:07:09.362 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.362 16:35:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.362 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:09.620 null6 00:07:09.878 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.878 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.878 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:10.136 null7 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
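Earlier in the trace (script lines @44-@50), the test grows the `NULL1` bdev with `bdev_null_resize` while probing the background perf initiator with `kill -0`; the "No such process" message simply means the initiator had already exited by the time of the probe. A minimal runnable sketch of that resize-plus-liveness pattern, where `rpc` is a local stub standing in for `scripts/rpc.py` and `sleep` stands in for the perf initiator:

```shell
#!/usr/bin/env bash
# Sketch of the resize/liveness pattern from the trace (assumption: "rpc"
# stubs scripts/rpc.py; loop bounds chosen to reach null_size=1029 as logged).
rpc() { echo "rpc $*"; }

sleep 1 &            # stand-in for the background perf initiator
perf_pid=$!

null_size=1024
for (( i = 0; i < 5; ++i )); do
    (( ++null_size ))
    rpc bdev_null_resize NULL1 "$null_size"              # hot-resize the bdev
    kill -0 "$perf_pid" 2>/dev/null && echo "perf $perf_pid alive"
done
wait "$perf_pid"     # reap the initiator, as the traced script does with wait
```

`kill -0` sends no signal; it only reports whether the PID still exists, which is why the traced script can use it as a cheap liveness check between resizes.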
00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:10.136 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
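The interleaved xtrace above is eight `add_remove` workers starting in parallel. Reconstructed from the traced script lines (@14-@18 for the worker, @58-@66 for the launcher), the shape is roughly the following; `rpc` is again a local stub for `scripts/rpc.py`, so this sketch runs without an SPDK target:

```shell
#!/usr/bin/env bash
# Reconstruction of the traced add/remove stress loop (assumption: "rpc"
# stubs scripts/rpc.py; the real script drives a live nvmf target).
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()

add_remove() {                       # script lines @14-@18
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; ++i )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

for (( i = 0; i < nthreads; ++i )); do   # script lines @59-@60
    rpc bdev_null_create "null$i" 100 4096
done

for (( i = 0; i < nthreads; ++i )); do   # script lines @62-@64
    add_remove $((i + 1)) "null$i" &     # one hotplug worker per null bdev
    pids+=($!)
done
wait "${pids[@]}"                        # script line @66
</test wait for all workers, as "wait 2246450 2246451 ..." in the log
```

Because the workers run concurrently against the same subsystem, their add/remove RPCs interleave arbitrarily, which is exactly the namespace hotplug race the test is stressing.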
00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2246450 2246451 2246452 2246455 2246457 2246459 2246461 2246463 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.137 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.395 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.395 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.395 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.395 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.395 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.396 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.396 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.396 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.654 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.912 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.912 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.912 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.912 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.913 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:10.913 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.913 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.913 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.171 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.172 16:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.430 16:35:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.430 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.998 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.256 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.515 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.776 16:35:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.776 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.035 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.293 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.551 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.551 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.551 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:13.551 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.551 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.551 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.809 16:35:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.809 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.068 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 
16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.327 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.586 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.104 16:35:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.104 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.363 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.622 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.881 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.140 rmmod nvme_tcp 00:07:16.140 rmmod nvme_fabrics 00:07:16.140 rmmod nvme_keyring 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2241935 ']' 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2241935 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # 
'[' -z 2241935 ']' 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2241935 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2241935 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2241935' 00:07:16.140 killing process with pid 2241935 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2241935 00:07:16.140 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2241935 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@789 -- # iptables-restore 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.400 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.946 00:07:18.946 real 0m47.192s 00:07:18.946 user 3m39.593s 00:07:18.946 sys 0m15.964s 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.946 ************************************ 00:07:18.946 END TEST nvmf_ns_hotplug_stress 00:07:18.946 ************************************ 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.946 ************************************ 00:07:18.946 START TEST nvmf_delete_subsystem 00:07:18.946 ************************************ 00:07:18.946 
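The hotplug-stress teardown that just completed (`nvmftestfini`) combines three shell patterns visible in the trace above: a retry loop with errexit disabled around `modprobe -v -r nvme-tcp`, an `iptr` cleanup that pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, and a `killprocess`-style kill-then-wait on the app PID. A minimal, unprivileged sketch of those three patterns (all names and the sample ruleset here are illustrative stand-ins, not the actual `common.sh` helpers):

```shell
#!/usr/bin/env bash
# Sketch of three teardown patterns from nvmftestfini (illustrative only).

# 1) Retry loop with errexit disabled, like the "for i in {1..20}; modprobe -r" loop:
set +e
for i in {1..5}; do
    [ "$i" -ge 3 ] && break   # stand-in for "modprobe -v -r nvme-tcp" succeeding
    sleep 0.01
done
set -e
echo "module removal succeeded on attempt $i"

# 2) iptr-style cleanup: save rules, drop SPDK_NVMF-tagged ones, restore.
#    Simulated on sample text, since iptables-save/iptables-restore need root.
rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i lo -j ACCEPT'
printf '%s\n' "$rules" | grep -v SPDK_NVMF
# → -A INPUT -i lo -j ACCEPT

# 3) killprocess-style shutdown: confirm the PID exists, signal it, then reap it.
sleep 60 &
pid=$!
kill -0 "$pid" && kill "$pid"    # kill -0 only checks that the process exists
wait "$pid" 2>/dev/null || true  # wait returns the nonzero kill status; ignore it
echo "killed pid $pid"
```

The `set +e` / `set -e` bracketing matters because the suite otherwise runs with errexit: a transiently failing `modprobe -r` (module still in use) must not abort the whole cleanup, and `wait` on a killed child returns 128+SIGTERM, which likewise has to be tolerated.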
16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.946 * Looking for test storage... 00:07:18.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.946 16:35:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.946 16:35:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.946 --rc genhtml_branch_coverage=1 00:07:18.946 --rc genhtml_function_coverage=1 00:07:18.946 --rc genhtml_legend=1 00:07:18.946 --rc geninfo_all_blocks=1 00:07:18.946 --rc geninfo_unexecuted_blocks=1 00:07:18.946 00:07:18.946 ' 00:07:18.946 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.946 --rc genhtml_branch_coverage=1 00:07:18.946 --rc genhtml_function_coverage=1 00:07:18.946 --rc genhtml_legend=1 00:07:18.946 --rc geninfo_all_blocks=1 00:07:18.946 --rc geninfo_unexecuted_blocks=1 00:07:18.946 00:07:18.946 ' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.947 --rc genhtml_branch_coverage=1 00:07:18.947 --rc genhtml_function_coverage=1 00:07:18.947 --rc genhtml_legend=1 00:07:18.947 --rc geninfo_all_blocks=1 00:07:18.947 --rc geninfo_unexecuted_blocks=1 00:07:18.947 00:07:18.947 ' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.947 --rc genhtml_branch_coverage=1 00:07:18.947 --rc genhtml_function_coverage=1 00:07:18.947 --rc genhtml_legend=1 00:07:18.947 --rc geninfo_all_blocks=1 00:07:18.947 --rc geninfo_unexecuted_blocks=1 00:07:18.947 00:07:18.947 ' 
00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.947 16:35:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.947 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:20.854 16:35:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:20.854 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:20.855 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:20.855 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:20.855 Found net devices under 0000:09:00.0: cvl_0_0 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:07:20.855 Found net devices under 0000:09:00.1: cvl_0_1 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:20.855 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:20.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:07:20.855 00:07:20.855 --- 10.0.0.2 ping statistics --- 00:07:20.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.856 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:07:20.856 00:07:20.856 --- 10.0.0.1 ping statistics --- 00:07:20.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.856 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:20.856 16:35:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2249349 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2249349 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2249349 ']' 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.856 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.115 [2024-10-17 16:35:34.550571] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
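The interface and namespace plumbing traced above (nvmf/common.sh, nvmf_tcp_init) boils down to a short sequence of `ip`/`iptables` commands. A minimal standalone sketch follows; the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addressing come from this log, while the wrapper function name is illustrative and not part of SPDK. It must run as root on a host that actually has those netdevs.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps: move the target NIC into a network
# namespace, address both ends, open TCP/4420, then verify reachability.
# Requires root and real cvl_0_0/cvl_0_1 devices; function name is ours.
setup_nvmf_netns() {
    local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1} ns=${3:-cvl_0_0_ns_spdk}
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1  # both directions, as in the log
}
```

Putting the target NIC in its own namespace lets the initiator and target share one physical host while still exercising a real TCP path between two interfaces.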
00:07:21.115 [2024-10-17 16:35:34.550664] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.115 [2024-10-17 16:35:34.615522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.115 [2024-10-17 16:35:34.673551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.115 [2024-10-17 16:35:34.673610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.115 [2024-10-17 16:35:34.673638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.115 [2024-10-17 16:35:34.673650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.115 [2024-10-17 16:35:34.673659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.115 [2024-10-17 16:35:34.675186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.115 [2024-10-17 16:35:34.675192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.115 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.115 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:21.115 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:21.115 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:21.115 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 [2024-10-17 16:35:34.812214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 [2024-10-17 16:35:34.828465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 NULL1 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 Delay0 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.373 16:35:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2249380 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:21.373 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:21.373 [2024-10-17 16:35:34.903197] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
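Stripped of the xtrace noise, the rpc_cmd sequence the test drives before launching perf is roughly the following. The SPDK_DIR path and the `rpc`/`provision_delete_subsystem_target` wrapper names are assumptions for illustration; the RPC names and parameters are exactly the ones in the trace.

```shell
# Condensed form of the rpc_cmd calls above. Assumes the target was started
# inside the namespace, so rpc.py reaches its /var/tmp/spdk.sock there.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed location
rpc() { ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/scripts/rpc.py" "$@"; }

provision_delete_subsystem_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                           # allow any host, max 10 namespaces
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512 B blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000              # ~1 s latency on every I/O path (usec)
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # expose Delay0 as NSID 1
}
```

The delay bdev is the point of the test: with roughly a second of artificial latency per I/O, perf always has requests in flight when nvmf_delete_subsystem fires, so the deletion path must abort queued I/O, producing the (sct=0, sc=8) completions seen below.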
00:07:23.275 16:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.275 16:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.275 16:35:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 Write completed with error (sct=0, sc=8) 00:07:23.535 Read completed with error (sct=0, sc=8) 00:07:23.535 starting I/O failed: -6 
00:07:23.535 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions elided while nvmf_delete_subsystem aborts in-flight I/O ...]
00:07:23.535 [2024-10-17 16:35:37.154936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbb84000c00 is same with the state(6) to be set
00:07:23.536 [2024-10-17 16:35:37.155986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404570 is same with the state(6) to be set
00:07:23.536 [2024-10-17 16:35:37.156525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbb8400d490 is same with the state(6) to be set
00:07:24.472 [2024-10-17 16:35:38.123411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2405a70 is same with the state(6) to be set
00:07:24.472 [2024-10-17 16:35:38.156343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404390 is same with the state(6) to be set
00:07:24.472 [2024-10-17 16:35:38.156565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2404750 is same with the state(6) to be set
00:07:24.473 [2024-10-17 16:35:38.158720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbb8400cfe0 is same with the state(6) to be set
00:07:24.473 [2024-10-17 16:35:38.159007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbb8400d7c0 is same with the state(6) to be set
00:07:24.473 Initializing NVMe Controllers
00:07:24.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:24.473 Controller IO queue size 128, less than required.
00:07:24.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:24.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:24.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:24.473 Initialization complete. Launching workers.
00:07:24.473 ======================================================== 00:07:24.473 Latency(us) 00:07:24.473 Device Information : IOPS MiB/s Average min max 00:07:24.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.27 0.08 1002466.82 568.03 2003130.60 00:07:24.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.79 0.08 1009504.92 1596.53 2005566.90 00:07:24.473 ======================================================== 00:07:24.473 Total : 318.07 0.16 1005958.42 568.03 2005566.90 00:07:24.473 00:07:24.473 [2024-10-17 16:35:38.159707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2405a70 (9): Bad file descriptor 00:07:24.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:24.731 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.731 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:24.731 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2249380 00:07:24.732 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2249380 00:07:24.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2249380) - No such process 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2249380 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:24.991 16:35:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2249380 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2249380 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.991 
16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.991 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.250 [2024-10-17 16:35:38.683326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2249788 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:25.250 16:35:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.250 [2024-10-17 16:35:38.745902] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:25.816 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.816 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:25.816 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.074 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.074 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:26.074 16:35:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.641 16:35:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.641 16:35:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:26.641 16:35:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.208 16:35:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.208 16:35:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:27.208 16:35:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.775 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.775 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:27.775 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.034 16:35:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.034 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:28.034 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.300 Initializing NVMe Controllers 00:07:28.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.300 Controller IO queue size 128, less than required. 00:07:28.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:28.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:28.300 Initialization complete. Launching workers. 00:07:28.300 ======================================================== 00:07:28.300 Latency(us) 00:07:28.300 Device Information : IOPS MiB/s Average min max 00:07:28.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006222.38 1000212.99 1042580.24 00:07:28.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004941.66 1000191.36 1014204.55 00:07:28.300 ======================================================== 00:07:28.300 Total : 256.00 0.12 1005582.02 1000191.36 1042580.24 00:07:28.300 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2249788 00:07:28.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2249788) - No such process 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2249788 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.578 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.578 rmmod nvme_tcp 00:07:28.578 rmmod nvme_fabrics 00:07:28.882 rmmod nvme_keyring 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2249349 ']' 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2249349 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2249349 ']' 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2249349 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:28.882 16:35:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2249349 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2249349' 00:07:28.882 killing process with pid 2249349 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2249349 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2249349 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:28.882 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:29.165 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.166 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:29.166 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.166 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.166 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.072 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.072 00:07:31.072 real 0m12.487s 00:07:31.072 user 0m28.198s 00:07:31.072 sys 0m2.989s 00:07:31.072 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.072 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.072 ************************************ 00:07:31.072 END TEST nvmf_delete_subsystem 00:07:31.072 ************************************ 00:07:31.072 16:35:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:31.072 16:35:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.073 16:35:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.073 16:35:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 ************************************ 00:07:31.073 START TEST nvmf_host_management 00:07:31.073 ************************************ 00:07:31.073 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:31.073 * Looking for test storage... 
00:07:31.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.073 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:31.073 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:31.073 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:31.332 16:35:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:31.332 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.333 16:35:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.333 --rc genhtml_branch_coverage=1 00:07:31.333 --rc genhtml_function_coverage=1 00:07:31.333 --rc genhtml_legend=1 00:07:31.333 --rc geninfo_all_blocks=1 00:07:31.333 --rc geninfo_unexecuted_blocks=1 00:07:31.333 00:07:31.333 ' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.333 --rc genhtml_branch_coverage=1 00:07:31.333 --rc genhtml_function_coverage=1 00:07:31.333 --rc genhtml_legend=1 00:07:31.333 --rc geninfo_all_blocks=1 00:07:31.333 --rc geninfo_unexecuted_blocks=1 00:07:31.333 00:07:31.333 ' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.333 --rc genhtml_branch_coverage=1 00:07:31.333 --rc genhtml_function_coverage=1 00:07:31.333 --rc genhtml_legend=1 00:07:31.333 --rc geninfo_all_blocks=1 00:07:31.333 --rc geninfo_unexecuted_blocks=1 00:07:31.333 00:07:31.333 ' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.333 --rc genhtml_branch_coverage=1 00:07:31.333 --rc genhtml_function_coverage=1 00:07:31.333 --rc genhtml_legend=1 00:07:31.333 --rc geninfo_all_blocks=1 00:07:31.333 --rc geninfo_unexecuted_blocks=1 00:07:31.333 00:07:31.333 ' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.333 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.238 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.238 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.239 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:33.239 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:33.239 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.239 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:33.239 Found net devices under 0000:09:00.0: cvl_0_0 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:33.239 Found net devices under 0000:09:00.1: cvl_0_1 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.239 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.239 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.499 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.499 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.499 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
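The `ipts` call above expands (common.sh@788) into a plain `iptables` invocation that appends an `SPDK_NVMF:<args>` comment to the rule. A minimal sketch of that wrapper, consistent with the expansion printed in the trace; `IPTABLES` is an illustrative override hook (not in the original script) so the sketch can be dry-run without root:

```shell
# Reissue the iptables arguments with an "SPDK_NVMF:<args>" comment, so
# teardown can later match and delete exactly the rules this test added.
# IPTABLES defaults to the real binary; set IPTABLES=echo for a dry run.
ipts() {
  "${IPTABLES:-iptables}" "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Dry-run example matching the rule in the log:
IPTABLES=echo ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging every rule with a fixed prefix is what lets cleanup stay selective: a later pass can list rules, grep for `SPDK_NVMF:`, and delete only those.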
00:07:33.499 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:07:33.499 00:07:33.499 --- 10.0.0.2 ping statistics --- 00:07:33.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.499 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:33.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:07:33.500 00:07:33.500 --- 10.0.0.1 ping statistics --- 00:07:33.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.500 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 
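The `nvmf_tcp_init` sequence above builds the loopback-free TCP test bed: move the target-side port (`cvl_0_0`) into a fresh network namespace, address both ends on 10.0.0.0/24, bring the links up, then ping in both directions. A dry-run sketch of those steps under the same names; `setup_tcp_ns` and the `RUN` switch are illustrative, not part of the SPDK scripts:

```shell
# Sketch of the netns setup performed in the trace. By default this only
# echoes the commands (RUN=echo); executing them for real requires root.
setup_tcp_ns() {
  local ns=$1 target_if=$2 initiator_if=$3
  local run=${RUN:-echo}
  $run ip netns add "$ns"                                    # namespace for the target
  $run ip link set "$target_if" netns "$ns"                  # move target port inside
  $run ip addr add 10.0.0.1/24 dev "$initiator_if"           # initiator stays in root ns
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  $run ip link set "$initiator_if" up
  $run ip netns exec "$ns" ip link set "$target_if" up
  $run ip netns exec "$ns" ip link set lo up
}

setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Because the two physical ports are cabled back to back, splitting them across namespaces forces traffic onto the wire instead of the kernel loopback path, which is why the subsequent pings (and the NVMe/TCP I/O later in the run) exercise the real NIC.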
00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2252262 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2252262 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2252262 ']' 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
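`waitforlisten` above blocks until the freshly started `nvmf_tgt` answers on its RPC socket, using the `rpc_addr` and `max_retries=100` locals visible in the trace. A hedged sketch of that idiom; `rpc_probe` is a hypothetical stand-in for the real probe (roughly `rpc.py -s "$rpc_addr" rpc_get_methods`), and the exact check order inside SPDK's helper may differ:

```shell
# Poll until the RPC socket answers, giving up if the process dies or
# max_retries polls elapse. Returns 0 once the target is listening.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local i=0 max_retries=100
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while [ "$i" -lt "$max_retries" ]; do
    kill -0 "$pid" 2>/dev/null || return 1   # target crashed before listening
    if rpc_probe "$rpc_addr"; then
      return 0
    fi
    sleep 0.1
    i=$((i+1))
  done
  return 1
}
```

Polling the RPC socket rather than sleeping a fixed interval is what keeps the harness both fast on a healthy start and bounded when the target fails to come up.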
00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.500 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.500 [2024-10-17 16:35:47.049604] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:07:33.500 [2024-10-17 16:35:47.049701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.500 [2024-10-17 16:35:47.113235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.500 [2024-10-17 16:35:47.176082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.500 [2024-10-17 16:35:47.176136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.500 [2024-10-17 16:35:47.176164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.500 [2024-10-17 16:35:47.176175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.500 [2024-10-17 16:35:47.176185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:33.500 [2024-10-17 16:35:47.177772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.500 [2024-10-17 16:35:47.177834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.500 [2024-10-17 16:35:47.177869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:33.500 [2024-10-17 16:35:47.177871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.761 [2024-10-17 16:35:47.359777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:33.761 16:35:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.761 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.762 Malloc0 00:07:33.762 [2024-10-17 16:35:47.430677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.762 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2252319 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2252319 /var/tmp/bdevperf.sock 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2252319 ']' 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:34.020 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:34.021 { 00:07:34.021 "params": { 00:07:34.021 "name": "Nvme$subsystem", 00:07:34.021 "trtype": "$TEST_TRANSPORT", 00:07:34.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.021 "adrfam": "ipv4", 00:07:34.021 "trsvcid": "$NVMF_PORT", 00:07:34.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.021 "hdgst": ${hdgst:-false}, 
00:07:34.021 "ddgst": ${ddgst:-false} 00:07:34.021 }, 00:07:34.021 "method": "bdev_nvme_attach_controller" 00:07:34.021 } 00:07:34.021 EOF 00:07:34.021 )") 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:34.021 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:34.021 "params": { 00:07:34.021 "name": "Nvme0", 00:07:34.021 "trtype": "tcp", 00:07:34.021 "traddr": "10.0.0.2", 00:07:34.021 "adrfam": "ipv4", 00:07:34.021 "trsvcid": "4420", 00:07:34.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.021 "hdgst": false, 00:07:34.021 "ddgst": false 00:07:34.021 }, 00:07:34.021 "method": "bdev_nvme_attach_controller" 00:07:34.021 }' 00:07:34.021 [2024-10-17 16:35:47.506949] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:07:34.021 [2024-10-17 16:35:47.507069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252319 ] 00:07:34.021 [2024-10-17 16:35:47.566487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.021 [2024-10-17 16:35:47.626088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.280 Running I/O for 10 seconds... 
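The JSON that bdevperf reads from `/dev/fd/63` above is produced by `gen_nvmf_target_json`, which instantiates a per-subsystem template and resolves it with `jq`. A minimal sketch that emits the same resolved controller entry the trace prints (addresses fixed to this run's 10.0.0.2:4420; the real helper additionally wraps entries in a `config` array and joins several subsystems):

```shell
# Emit a bdev_nvme_attach_controller config for subsystem $1, with the
# digest options defaulting to false exactly as in the logged expansion.
gen_nvmf_target_json() {
  local n=${1:-0}
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Feeding the config through a process substitution (`--json /dev/fd/63`) lets the harness hand bdevperf a fully formed attach request without writing a temp file.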
00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:34.539 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.799 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.799 [2024-10-17 16:35:48.366140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 16:35:48.366612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.799 [2024-10-17 16:35:48.366627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:34.799 [2024-10-17 
16:35:48.366641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:34.799-00:07:34.800 [2024-10-17 16:35:48.366657 - 16:35:48.368122] (repeated NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE sqid:1 cid:44-63 nsid:1 lba:79360-81792 len:128 and READ sqid:1 cid:0-28 nsid:1 lba:73728-77312 len:128, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, as outstanding I/O was aborted during queue teardown) 
00:07:34.800 [2024-10-17 16:35:48.368206] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2545a10 was disconnected and freed. reset controller. 
00:07:34.800 [2024-10-17 16:35:48.369421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:34.800 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.800 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:34.800 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.800 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:34.800 task offset: 77440 on job bdev=Nvme0n1 fails 00:07:34.800 00:07:34.800 Latency(us) 00:07:34.800 [2024-10-17T14:35:48.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.800 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:34.800 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:34.800 Verification LBA range: start 0x0 length 0x400 00:07:34.800 Nvme0n1 : 0.40 1429.18 89.32 158.80 0.00 39173.01 2669.99 38641.97 00:07:34.800 [2024-10-17T14:35:48.490Z] =================================================================================================================== 00:07:34.800 [2024-10-17T14:35:48.490Z] Total : 1429.18 89.32 158.80 0.00 39173.01 2669.99 38641.97 00:07:34.800 [2024-10-17 16:35:48.371373] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.800 [2024-10-17 16:35:48.371403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232cb00 (9): Bad file descriptor 00:07:34.800 [2024-10-17 16:35:48.373570] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:34.800 [2024-10-17 16:35:48.373766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:34.800 [2024-10-17 16:35:48.373795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:34.800 [2024-10-17 16:35:48.373821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:34.801 [2024-10-17 16:35:48.373836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:34.801 [2024-10-17 16:35:48.373854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:34.801 [2024-10-17 16:35:48.373868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x232cb00 00:07:34.801 [2024-10-17 16:35:48.373909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232cb00 (9): Bad file descriptor 00:07:34.801 [2024-10-17 16:35:48.373935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:34.801 [2024-10-17 16:35:48.373949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:34.801 [2024-10-17 16:35:48.373965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:34.801 [2024-10-17 16:35:48.373985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:34.801 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.801 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2252319 00:07:35.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2252319) - No such process 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:35.739 { 00:07:35.739 "params": { 00:07:35.739 "name": "Nvme$subsystem", 00:07:35.739 "trtype": "$TEST_TRANSPORT", 00:07:35.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.739 "adrfam": "ipv4", 00:07:35.739 "trsvcid": "$NVMF_PORT", 00:07:35.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.739 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:35.739 "hdgst": ${hdgst:-false}, 00:07:35.739 "ddgst": ${ddgst:-false} 00:07:35.739 }, 00:07:35.739 "method": "bdev_nvme_attach_controller" 00:07:35.739 } 00:07:35.739 EOF 00:07:35.739 )") 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:35.739 16:35:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:35.739 "params": { 00:07:35.739 "name": "Nvme0", 00:07:35.739 "trtype": "tcp", 00:07:35.739 "traddr": "10.0.0.2", 00:07:35.739 "adrfam": "ipv4", 00:07:35.739 "trsvcid": "4420", 00:07:35.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.739 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:35.739 "hdgst": false, 00:07:35.739 "ddgst": false 00:07:35.739 }, 00:07:35.739 "method": "bdev_nvme_attach_controller" 00:07:35.739 }' 00:07:35.997 [2024-10-17 16:35:49.430644] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:07:35.997 [2024-10-17 16:35:49.430717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252592 ] 00:07:35.997 [2024-10-17 16:35:49.489416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.997 [2024-10-17 16:35:49.547994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.258 Running I/O for 1 seconds... 
00:07:37.640 1664.00 IOPS, 104.00 MiB/s 00:07:37.640 Latency(us) 00:07:37.640 [2024-10-17T14:35:51.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.640 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:37.640 Verification LBA range: start 0x0 length 0x400 00:07:37.640 Nvme0n1 : 1.02 1686.10 105.38 0.00 0.00 37341.93 6043.88 33399.09 00:07:37.640 [2024-10-17T14:35:51.330Z] =================================================================================================================== 00:07:37.640 [2024-10-17T14:35:51.330Z] Total : 1686.10 105.38 0.00 0.00 37341.93 6043.88 33399.09 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.640 16:35:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.640 rmmod nvme_tcp 00:07:37.640 rmmod nvme_fabrics 00:07:37.640 rmmod nvme_keyring 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2252262 ']' 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2252262 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2252262 ']' 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2252262 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2252262 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2252262' 00:07:37.640 killing process with pid 2252262 00:07:37.640 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2252262 00:07:37.640 16:35:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2252262 00:07:37.908 [2024-10-17 16:35:51.467876] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.908 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:40.450 00:07:40.450 real 0m8.907s 00:07:40.450 user 0m20.504s 
00:07:40.450 sys 0m2.653s 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.450 ************************************ 00:07:40.450 END TEST nvmf_host_management 00:07:40.450 ************************************ 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.450 ************************************ 00:07:40.450 START TEST nvmf_lvol 00:07:40.450 ************************************ 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:40.450 * Looking for test storage... 
00:07:40.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.450 16:35:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:40.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.450 --rc genhtml_branch_coverage=1 00:07:40.450 --rc genhtml_function_coverage=1 00:07:40.450 --rc genhtml_legend=1 00:07:40.450 --rc geninfo_all_blocks=1 00:07:40.450 --rc geninfo_unexecuted_blocks=1 
00:07:40.450 00:07:40.450 ' 00:07:40.450 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:40.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.450 --rc genhtml_branch_coverage=1 00:07:40.451 --rc genhtml_function_coverage=1 00:07:40.451 --rc genhtml_legend=1 00:07:40.451 --rc geninfo_all_blocks=1 00:07:40.451 --rc geninfo_unexecuted_blocks=1 00:07:40.451 00:07:40.451 ' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:40.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.451 --rc genhtml_branch_coverage=1 00:07:40.451 --rc genhtml_function_coverage=1 00:07:40.451 --rc genhtml_legend=1 00:07:40.451 --rc geninfo_all_blocks=1 00:07:40.451 --rc geninfo_unexecuted_blocks=1 00:07:40.451 00:07:40.451 ' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:40.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.451 --rc genhtml_branch_coverage=1 00:07:40.451 --rc genhtml_function_coverage=1 00:07:40.451 --rc genhtml_legend=1 00:07:40.451 --rc geninfo_all_blocks=1 00:07:40.451 --rc geninfo_unexecuted_blocks=1 00:07:40.451 00:07:40.451 ' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.451 16:35:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.451 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:42.359 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:42.359 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.359 
16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:42.359 Found net devices under 0000:09:00.0: cvl_0_0 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:42.359 16:35:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:42.359 Found net devices under 0000:09:00.1: cvl_0_1 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.359 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:42.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:42.360 00:07:42.360 --- 10.0.0.2 ping statistics --- 00:07:42.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.360 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:42.360 16:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:07:42.360 00:07:42.360 --- 10.0.0.1 ping statistics --- 00:07:42.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.360 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2254760 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2254760 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2254760 ']' 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.360 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.620 [2024-10-17 16:35:56.085846] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:07:42.621 [2024-10-17 16:35:56.085928] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.621 [2024-10-17 16:35:56.160420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.621 [2024-10-17 16:35:56.222450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.621 [2024-10-17 16:35:56.222512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.621 [2024-10-17 16:35:56.222536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.621 [2024-10-17 16:35:56.222549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.621 [2024-10-17 16:35:56.222561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:42.621 [2024-10-17 16:35:56.224102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.621 [2024-10-17 16:35:56.224176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.621 [2024-10-17 16:35:56.224157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.880 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:43.140 [2024-10-17 16:35:56.615414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.140 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:43.398 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:43.398 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:43.656 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:43.656 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:43.920 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:44.180 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1c5f9348-66b8-4843-8ff2-5420c0e4240b 00:07:44.180 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c5f9348-66b8-4843-8ff2-5420c0e4240b lvol 20 00:07:44.438 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=13c05529-70c8-4b66-9d08-d119206eb7c5 00:07:44.438 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.696 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13c05529-70c8-4b66-9d08-d119206eb7c5 00:07:44.954 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:45.213 [2024-10-17 16:35:58.857534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.213 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.472 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2255119 00:07:45.472 16:35:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:45.472 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:46.851 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 13c05529-70c8-4b66-9d08-d119206eb7c5 MY_SNAPSHOT 00:07:46.851 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=240d9bee-f1bc-4ada-8e02-74daae1c140a 00:07:46.851 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 13c05529-70c8-4b66-9d08-d119206eb7c5 30 00:07:47.422 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 240d9bee-f1bc-4ada-8e02-74daae1c140a MY_CLONE 00:07:47.680 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a2feebd8-3344-46e0-bd33-089b7963d890 00:07:47.680 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a2feebd8-3344-46e0-bd33-089b7963d890 00:07:48.249 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2255119 00:07:56.369 Initializing NVMe Controllers 00:07:56.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:56.369 Controller IO queue size 128, less than required. 00:07:56.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:56.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:56.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:56.369 Initialization complete. Launching workers. 00:07:56.369 ======================================================== 00:07:56.369 Latency(us) 00:07:56.369 Device Information : IOPS MiB/s Average min max 00:07:56.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10442.29 40.79 12267.92 2709.25 97297.82 00:07:56.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10340.90 40.39 12383.54 2195.65 70103.16 00:07:56.369 ======================================================== 00:07:56.369 Total : 20783.19 81.18 12325.45 2195.65 97297.82 00:07:56.369 00:07:56.369 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.369 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13c05529-70c8-4b66-9d08-d119206eb7c5 00:07:56.369 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c5f9348-66b8-4843-8ff2-5420c0e4240b 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.938 rmmod nvme_tcp 00:07:56.938 rmmod nvme_fabrics 00:07:56.938 rmmod nvme_keyring 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2254760 ']' 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2254760 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2254760 ']' 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2254760 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2254760 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2254760' 00:07:56.938 killing process with pid 2254760 00:07:56.938 16:36:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2254760 00:07:56.938 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2254760 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.199 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.111 00:07:59.111 real 0m19.138s 00:07:59.111 user 1m4.743s 00:07:59.111 sys 0m5.741s 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.111 ************************************ 00:07:59.111 END TEST 
nvmf_lvol 00:07:59.111 ************************************ 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.111 ************************************ 00:07:59.111 START TEST nvmf_lvs_grow 00:07:59.111 ************************************ 00:07:59.111 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:59.371 * Looking for test storage... 00:07:59.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.371 16:36:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.371 --rc genhtml_branch_coverage=1 00:07:59.371 --rc genhtml_function_coverage=1 00:07:59.371 --rc genhtml_legend=1 00:07:59.371 --rc geninfo_all_blocks=1 00:07:59.371 --rc geninfo_unexecuted_blocks=1 00:07:59.371 00:07:59.371 ' 
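The xtrace above walks `scripts/common.sh` through the check `lt 1.15 2` on the lcov version: each version string is split on the characters `.-:` into an array (`read -ra ver1`), then compared component by component as integers. A hedged, self-contained sketch of that comparison follows; the function name `lt_version` is illustrative (SPDK's own helpers are `lt`/`cmp_versions`), and it assumes purely numeric components as in the trace.

```shell
# Illustrative re-creation of the version comparison traced above:
# split on ".-:" and compare numerically, component by component.
lt_version() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing components count as 0, so "2" compares like "2.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"
```

Note the numeric comparison matters: lexicographically `1.9` would sort after `1.15`, but component-wise `9 < 15`, which is why the script splits into arrays rather than comparing strings.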
00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.371 --rc genhtml_branch_coverage=1 00:07:59.371 --rc genhtml_function_coverage=1 00:07:59.371 --rc genhtml_legend=1 00:07:59.371 --rc geninfo_all_blocks=1 00:07:59.371 --rc geninfo_unexecuted_blocks=1 00:07:59.371 00:07:59.371 ' 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.371 --rc genhtml_branch_coverage=1 00:07:59.371 --rc genhtml_function_coverage=1 00:07:59.371 --rc genhtml_legend=1 00:07:59.371 --rc geninfo_all_blocks=1 00:07:59.371 --rc geninfo_unexecuted_blocks=1 00:07:59.371 00:07:59.371 ' 00:07:59.371 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:59.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.371 --rc genhtml_branch_coverage=1 00:07:59.371 --rc genhtml_function_coverage=1 00:07:59.371 --rc genhtml_legend=1 00:07:59.371 --rc geninfo_all_blocks=1 00:07:59.371 --rc geninfo_unexecuted_blocks=1 00:07:59.371 00:07:59.371 ' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.372 16:36:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.372 
16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.372 16:36:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.372 
16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.372 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:01.337 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:01.337 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.337 
16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:01.337 Found net devices under 0000:09:00.0: cvl_0_0 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:01.337 Found net devices under 0000:09:00.1: cvl_0_1 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.337 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.338 16:36:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.597 16:36:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:01.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:01.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:08:01.597
00:08:01.597 --- 10.0.0.2 ping statistics ---
00:08:01.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.597 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:01.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:01.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:08:01.597
00:08:01.597 --- 10.0.0.1 ping statistics ---
00:08:01.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.597 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- #
nvmfappstart -m 0x1 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2259022 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2259022 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2259022 ']' 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.597 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.597 [2024-10-17 16:36:15.172823] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:08:01.597 [2024-10-17 16:36:15.172901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.597 [2024-10-17 16:36:15.243195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.856 [2024-10-17 16:36:15.300415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.856 [2024-10-17 16:36:15.300468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.856 [2024-10-17 16:36:15.300481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.856 [2024-10-17 16:36:15.300492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.856 [2024-10-17 16:36:15.300501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:01.856 [2024-10-17 16:36:15.301073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.856 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:02.116 [2024-10-17 16:36:15.689458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.116 ************************************ 00:08:02.116 START TEST lvs_grow_clean 00:08:02.116 ************************************ 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.116 16:36:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.684 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:02.684 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:02.943 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:02.943 16:36:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:02.943 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.203 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.203 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.203 16:36:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 lvol 150 00:08:03.464 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d7fd3715-3884-410d-9d5e-17b7944c684f 00:08:03.464 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.464 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:03.724 [2024-10-17 16:36:17.301909] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:03.724 [2024-10-17 16:36:17.301999] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:03.724 true 00:08:03.724 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:03.724 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:03.983 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:03.983 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.243 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d7fd3715-3884-410d-9d5e-17b7944c684f 00:08:04.503 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.762 [2024-10-17 16:36:18.425360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.762 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2259588 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.330 16:36:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2259588 /var/tmp/bdevperf.sock 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2259588 ']' 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.330 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.330 [2024-10-17 16:36:18.761717] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:08:05.330 [2024-10-17 16:36:18.761795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259588 ] 00:08:05.330 [2024-10-17 16:36:18.819143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.330 [2024-10-17 16:36:18.879203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.330 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.330 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:05.330 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.900 Nvme0n1 00:08:05.900 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:06.159 [ 00:08:06.159 { 00:08:06.159 "name": "Nvme0n1", 00:08:06.159 "aliases": [ 00:08:06.159 "d7fd3715-3884-410d-9d5e-17b7944c684f" 00:08:06.159 ], 00:08:06.159 "product_name": "NVMe disk", 00:08:06.159 "block_size": 4096, 00:08:06.159 "num_blocks": 38912, 00:08:06.159 "uuid": "d7fd3715-3884-410d-9d5e-17b7944c684f", 00:08:06.159 "numa_id": 0, 00:08:06.159 "assigned_rate_limits": { 00:08:06.159 "rw_ios_per_sec": 0, 00:08:06.159 "rw_mbytes_per_sec": 0, 00:08:06.159 "r_mbytes_per_sec": 0, 00:08:06.159 "w_mbytes_per_sec": 0 00:08:06.159 }, 00:08:06.159 "claimed": false, 00:08:06.159 "zoned": false, 00:08:06.159 "supported_io_types": { 00:08:06.159 "read": true, 
00:08:06.159 "write": true, 00:08:06.159 "unmap": true, 00:08:06.159 "flush": true, 00:08:06.159 "reset": true, 00:08:06.159 "nvme_admin": true, 00:08:06.159 "nvme_io": true, 00:08:06.159 "nvme_io_md": false, 00:08:06.159 "write_zeroes": true, 00:08:06.159 "zcopy": false, 00:08:06.159 "get_zone_info": false, 00:08:06.159 "zone_management": false, 00:08:06.159 "zone_append": false, 00:08:06.159 "compare": true, 00:08:06.159 "compare_and_write": true, 00:08:06.159 "abort": true, 00:08:06.159 "seek_hole": false, 00:08:06.159 "seek_data": false, 00:08:06.159 "copy": true, 00:08:06.159 "nvme_iov_md": false 00:08:06.159 }, 00:08:06.159 "memory_domains": [ 00:08:06.159 { 00:08:06.159 "dma_device_id": "system", 00:08:06.159 "dma_device_type": 1 00:08:06.159 } 00:08:06.159 ], 00:08:06.159 "driver_specific": { 00:08:06.159 "nvme": [ 00:08:06.159 { 00:08:06.159 "trid": { 00:08:06.159 "trtype": "TCP", 00:08:06.159 "adrfam": "IPv4", 00:08:06.159 "traddr": "10.0.0.2", 00:08:06.159 "trsvcid": "4420", 00:08:06.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:06.159 }, 00:08:06.159 "ctrlr_data": { 00:08:06.159 "cntlid": 1, 00:08:06.159 "vendor_id": "0x8086", 00:08:06.159 "model_number": "SPDK bdev Controller", 00:08:06.159 "serial_number": "SPDK0", 00:08:06.159 "firmware_revision": "25.01", 00:08:06.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.159 "oacs": { 00:08:06.159 "security": 0, 00:08:06.159 "format": 0, 00:08:06.159 "firmware": 0, 00:08:06.159 "ns_manage": 0 00:08:06.159 }, 00:08:06.159 "multi_ctrlr": true, 00:08:06.159 "ana_reporting": false 00:08:06.159 }, 00:08:06.159 "vs": { 00:08:06.159 "nvme_version": "1.3" 00:08:06.159 }, 00:08:06.159 "ns_data": { 00:08:06.159 "id": 1, 00:08:06.159 "can_share": true 00:08:06.159 } 00:08:06.159 } 00:08:06.159 ], 00:08:06.159 "mp_policy": "active_passive" 00:08:06.159 } 00:08:06.159 } 00:08:06.159 ] 00:08:06.159 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2259600 00:08:06.159 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:06.159 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:06.159 Running I/O for 10 seconds... 00:08:07.101 Latency(us) 00:08:07.101 [2024-10-17T14:36:20.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.101 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:08:07.101 [2024-10-17T14:36:20.791Z] =================================================================================================================== 00:08:07.101 [2024-10-17T14:36:20.791Z] Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:08:07.101 00:08:08.041 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:08.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.041 Nvme0n1 : 2.00 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:08:08.041 [2024-10-17T14:36:21.731Z] =================================================================================================================== 00:08:08.041 [2024-10-17T14:36:21.731Z] Total : 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:08:08.041 00:08:08.299 true 00:08:08.299 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:08.299 16:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:08.559 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:08.559 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:08.559 16:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2259600 00:08:09.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.132 Nvme0n1 : 3.00 14139.67 55.23 0.00 0.00 0.00 0.00 0.00 00:08:09.132 [2024-10-17T14:36:22.822Z] =================================================================================================================== 00:08:09.132 [2024-10-17T14:36:22.822Z] Total : 14139.67 55.23 0.00 0.00 0.00 0.00 0.00 00:08:09.132 00:08:10.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.070 Nvme0n1 : 4.00 14256.00 55.69 0.00 0.00 0.00 0.00 0.00 00:08:10.070 [2024-10-17T14:36:23.760Z] =================================================================================================================== 00:08:10.070 [2024-10-17T14:36:23.760Z] Total : 14256.00 55.69 0.00 0.00 0.00 0.00 0.00 00:08:10.070 00:08:11.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.451 Nvme0n1 : 5.00 14300.40 55.86 0.00 0.00 0.00 0.00 0.00 00:08:11.451 [2024-10-17T14:36:25.141Z] =================================================================================================================== 00:08:11.451 [2024-10-17T14:36:25.141Z] Total : 14300.40 55.86 0.00 0.00 0.00 0.00 0.00 00:08:11.451 00:08:12.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.391 Nvme0n1 : 6.00 14351.17 56.06 0.00 0.00 0.00 0.00 0.00 00:08:12.391 [2024-10-17T14:36:26.081Z] =================================================================================================================== 00:08:12.391 
[2024-10-17T14:36:26.081Z] Total : 14351.17 56.06 0.00 0.00 0.00 0.00 0.00 00:08:12.391 00:08:13.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.332 Nvme0n1 : 7.00 14405.57 56.27 0.00 0.00 0.00 0.00 0.00 00:08:13.332 [2024-10-17T14:36:27.022Z] =================================================================================================================== 00:08:13.332 [2024-10-17T14:36:27.022Z] Total : 14405.57 56.27 0.00 0.00 0.00 0.00 0.00 00:08:13.332 00:08:14.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.272 Nvme0n1 : 8.00 14462.25 56.49 0.00 0.00 0.00 0.00 0.00 00:08:14.272 [2024-10-17T14:36:27.962Z] =================================================================================================================== 00:08:14.272 [2024-10-17T14:36:27.962Z] Total : 14462.25 56.49 0.00 0.00 0.00 0.00 0.00 00:08:14.272 00:08:15.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.214 Nvme0n1 : 9.00 14492.22 56.61 0.00 0.00 0.00 0.00 0.00 00:08:15.214 [2024-10-17T14:36:28.904Z] =================================================================================================================== 00:08:15.214 [2024-10-17T14:36:28.904Z] Total : 14492.22 56.61 0.00 0.00 0.00 0.00 0.00 00:08:15.214 00:08:16.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.155 Nvme0n1 : 10.00 14503.50 56.65 0.00 0.00 0.00 0.00 0.00 00:08:16.155 [2024-10-17T14:36:29.845Z] =================================================================================================================== 00:08:16.155 [2024-10-17T14:36:29.845Z] Total : 14503.50 56.65 0.00 0.00 0.00 0.00 0.00 00:08:16.155 00:08:16.155 00:08:16.155 Latency(us) 00:08:16.155 [2024-10-17T14:36:29.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:16.155 Nvme0n1 : 10.01 14506.48 56.67 0.00 0.00 8818.87 4417.61 17864.63 00:08:16.155 [2024-10-17T14:36:29.845Z] =================================================================================================================== 00:08:16.155 [2024-10-17T14:36:29.845Z] Total : 14506.48 56.67 0.00 0.00 8818.87 4417.61 17864.63 00:08:16.155 { 00:08:16.155 "results": [ 00:08:16.155 { 00:08:16.155 "job": "Nvme0n1", 00:08:16.155 "core_mask": "0x2", 00:08:16.155 "workload": "randwrite", 00:08:16.155 "status": "finished", 00:08:16.155 "queue_depth": 128, 00:08:16.155 "io_size": 4096, 00:08:16.155 "runtime": 10.006769, 00:08:16.155 "iops": 14506.480563306697, 00:08:16.155 "mibps": 56.665939700416786, 00:08:16.155 "io_failed": 0, 00:08:16.155 "io_timeout": 0, 00:08:16.155 "avg_latency_us": 8818.866628569009, 00:08:16.155 "min_latency_us": 4417.6118518518515, 00:08:16.155 "max_latency_us": 17864.62814814815 00:08:16.155 } 00:08:16.155 ], 00:08:16.155 "core_count": 1 00:08:16.155 } 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2259588 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2259588 ']' 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2259588 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2259588 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:16.155 16:36:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2259588' 00:08:16.155 killing process with pid 2259588 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2259588 00:08:16.155 Received shutdown signal, test time was about 10.000000 seconds 00:08:16.155 00:08:16.155 Latency(us) 00:08:16.155 [2024-10-17T14:36:29.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.155 [2024-10-17T14:36:29.845Z] =================================================================================================================== 00:08:16.155 [2024-10-17T14:36:29.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:16.155 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2259588 00:08:16.416 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.674 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.931 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:16.931 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:17.191 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:17.191 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:17.191 16:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.451 [2024-10-17 16:36:31.109903] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.712 
16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:17.712 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:17.973 request: 00:08:17.973 { 00:08:17.973 "uuid": "e6ce989b-46c9-47d5-9468-962c4b7ab3f9", 00:08:17.973 "method": "bdev_lvol_get_lvstores", 00:08:17.973 "req_id": 1 00:08:17.973 } 00:08:17.973 Got JSON-RPC error response 00:08:17.973 response: 00:08:17.973 { 00:08:17.973 "code": -19, 00:08:17.973 "message": "No such device" 00:08:17.973 } 00:08:17.973 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:17.973 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.973 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.973 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.973 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.233 aio_bdev 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d7fd3715-3884-410d-9d5e-17b7944c684f 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d7fd3715-3884-410d-9d5e-17b7944c684f 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.233 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.491 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d7fd3715-3884-410d-9d5e-17b7944c684f -t 2000 00:08:18.749 [ 00:08:18.749 { 00:08:18.749 "name": "d7fd3715-3884-410d-9d5e-17b7944c684f", 00:08:18.749 "aliases": [ 00:08:18.749 "lvs/lvol" 00:08:18.749 ], 00:08:18.749 "product_name": "Logical Volume", 00:08:18.749 "block_size": 4096, 00:08:18.749 "num_blocks": 38912, 00:08:18.749 "uuid": "d7fd3715-3884-410d-9d5e-17b7944c684f", 00:08:18.749 "assigned_rate_limits": { 00:08:18.749 "rw_ios_per_sec": 0, 00:08:18.749 "rw_mbytes_per_sec": 0, 00:08:18.749 "r_mbytes_per_sec": 0, 00:08:18.749 "w_mbytes_per_sec": 0 00:08:18.749 }, 00:08:18.749 "claimed": false, 00:08:18.749 "zoned": false, 00:08:18.749 "supported_io_types": { 00:08:18.749 "read": true, 00:08:18.749 "write": true, 00:08:18.749 "unmap": true, 00:08:18.749 "flush": false, 00:08:18.749 "reset": true, 00:08:18.749 
"nvme_admin": false, 00:08:18.749 "nvme_io": false, 00:08:18.749 "nvme_io_md": false, 00:08:18.749 "write_zeroes": true, 00:08:18.749 "zcopy": false, 00:08:18.749 "get_zone_info": false, 00:08:18.749 "zone_management": false, 00:08:18.749 "zone_append": false, 00:08:18.749 "compare": false, 00:08:18.749 "compare_and_write": false, 00:08:18.749 "abort": false, 00:08:18.749 "seek_hole": true, 00:08:18.749 "seek_data": true, 00:08:18.749 "copy": false, 00:08:18.749 "nvme_iov_md": false 00:08:18.749 }, 00:08:18.749 "driver_specific": { 00:08:18.749 "lvol": { 00:08:18.749 "lvol_store_uuid": "e6ce989b-46c9-47d5-9468-962c4b7ab3f9", 00:08:18.749 "base_bdev": "aio_bdev", 00:08:18.749 "thin_provision": false, 00:08:18.749 "num_allocated_clusters": 38, 00:08:18.749 "snapshot": false, 00:08:18.749 "clone": false, 00:08:18.749 "esnap_clone": false 00:08:18.749 } 00:08:18.749 } 00:08:18.749 } 00:08:18.749 ] 00:08:18.749 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:18.750 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:18.750 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.009 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.009 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:19.009 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:19.269 16:36:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:19.269 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d7fd3715-3884-410d-9d5e-17b7944c684f 00:08:19.528 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6ce989b-46c9-47d5-9468-962c4b7ab3f9 00:08:19.788 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.047 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.047 00:08:20.047 real 0m17.990s 00:08:20.047 user 0m17.556s 00:08:20.047 sys 0m1.810s 00:08:20.047 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.047 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:20.047 ************************************ 00:08:20.047 END TEST lvs_grow_clean 00:08:20.047 ************************************ 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:20.305 ************************************ 
00:08:20.305 START TEST lvs_grow_dirty 00:08:20.305 ************************************ 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.305 16:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:20.564 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:20.564 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:20.823 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=50d4620a-604d-4a95-a51e-61db88441fb8 00:08:20.823 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:20.823 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:21.082 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:21.082 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:21.082 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50d4620a-604d-4a95-a51e-61db88441fb8 lvol 150 00:08:21.342 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f6c1a852-b6d8-413b-a758-b8eea67b274f 00:08:21.342 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.342 16:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:21.601 [2024-10-17 16:36:35.232679] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:21.601 [2024-10-17 16:36:35.232777] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:21.601 true 00:08:21.601 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:21.601 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:21.859 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:21.859 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.427 16:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f6c1a852-b6d8-413b-a758-b8eea67b274f 00:08:22.427 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.687 [2024-10-17 16:36:36.336054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.687 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2261664 00:08:22.946 16:36:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2261664 /var/tmp/bdevperf.sock 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2261664 ']' 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.946 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.204 [2024-10-17 16:36:36.673576] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:08:23.204 [2024-10-17 16:36:36.673656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261664 ] 00:08:23.204 [2024-10-17 16:36:36.735553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.204 [2024-10-17 16:36:36.798270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.462 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.462 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:23.462 16:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:23.725 Nvme0n1 00:08:23.725 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:23.989 [ 00:08:23.989 { 00:08:23.989 "name": "Nvme0n1", 00:08:23.989 "aliases": [ 00:08:23.989 "f6c1a852-b6d8-413b-a758-b8eea67b274f" 00:08:23.989 ], 00:08:23.989 "product_name": "NVMe disk", 00:08:23.989 "block_size": 4096, 00:08:23.989 "num_blocks": 38912, 00:08:23.989 "uuid": "f6c1a852-b6d8-413b-a758-b8eea67b274f", 00:08:23.989 "numa_id": 0, 00:08:23.989 "assigned_rate_limits": { 00:08:23.989 "rw_ios_per_sec": 0, 00:08:23.989 "rw_mbytes_per_sec": 0, 00:08:23.989 "r_mbytes_per_sec": 0, 00:08:23.989 "w_mbytes_per_sec": 0 00:08:23.989 }, 00:08:23.989 "claimed": false, 00:08:23.989 "zoned": false, 00:08:23.989 "supported_io_types": { 00:08:23.989 "read": true, 
00:08:23.989 "write": true, 00:08:23.989 "unmap": true, 00:08:23.989 "flush": true, 00:08:23.989 "reset": true, 00:08:23.989 "nvme_admin": true, 00:08:23.989 "nvme_io": true, 00:08:23.989 "nvme_io_md": false, 00:08:23.989 "write_zeroes": true, 00:08:23.989 "zcopy": false, 00:08:23.989 "get_zone_info": false, 00:08:23.989 "zone_management": false, 00:08:23.989 "zone_append": false, 00:08:23.989 "compare": true, 00:08:23.989 "compare_and_write": true, 00:08:23.989 "abort": true, 00:08:23.989 "seek_hole": false, 00:08:23.989 "seek_data": false, 00:08:23.989 "copy": true, 00:08:23.989 "nvme_iov_md": false 00:08:23.989 }, 00:08:23.989 "memory_domains": [ 00:08:23.989 { 00:08:23.989 "dma_device_id": "system", 00:08:23.989 "dma_device_type": 1 00:08:23.989 } 00:08:23.989 ], 00:08:23.989 "driver_specific": { 00:08:23.989 "nvme": [ 00:08:23.989 { 00:08:23.989 "trid": { 00:08:23.989 "trtype": "TCP", 00:08:23.989 "adrfam": "IPv4", 00:08:23.989 "traddr": "10.0.0.2", 00:08:23.989 "trsvcid": "4420", 00:08:23.989 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:23.989 }, 00:08:23.989 "ctrlr_data": { 00:08:23.989 "cntlid": 1, 00:08:23.989 "vendor_id": "0x8086", 00:08:23.989 "model_number": "SPDK bdev Controller", 00:08:23.989 "serial_number": "SPDK0", 00:08:23.989 "firmware_revision": "25.01", 00:08:23.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.989 "oacs": { 00:08:23.989 "security": 0, 00:08:23.989 "format": 0, 00:08:23.989 "firmware": 0, 00:08:23.989 "ns_manage": 0 00:08:23.989 }, 00:08:23.989 "multi_ctrlr": true, 00:08:23.989 "ana_reporting": false 00:08:23.989 }, 00:08:23.989 "vs": { 00:08:23.989 "nvme_version": "1.3" 00:08:23.989 }, 00:08:23.989 "ns_data": { 00:08:23.989 "id": 1, 00:08:23.989 "can_share": true 00:08:23.989 } 00:08:23.989 } 00:08:23.989 ], 00:08:23.989 "mp_policy": "active_passive" 00:08:23.989 } 00:08:23.989 } 00:08:23.989 ] 00:08:23.989 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2261790 00:08:23.989 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:23.989 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:23.989 Running I/O for 10 seconds... 00:08:24.928 Latency(us) 00:08:24.928 [2024-10-17T14:36:38.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.928 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:08:24.928 [2024-10-17T14:36:38.618Z] =================================================================================================================== 00:08:24.928 [2024-10-17T14:36:38.618Z] Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:08:24.928 00:08:25.865 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:26.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.122 Nvme0n1 : 2.00 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:08:26.122 [2024-10-17T14:36:39.812Z] =================================================================================================================== 00:08:26.122 [2024-10-17T14:36:39.812Z] Total : 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:08:26.122 00:08:26.122 true 00:08:26.122 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:26.122 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:26.690 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:26.690 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:26.690 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2261790 00:08:26.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.950 Nvme0n1 : 3.00 14118.67 55.15 0.00 0.00 0.00 0.00 0.00 00:08:26.950 [2024-10-17T14:36:40.640Z] =================================================================================================================== 00:08:26.950 [2024-10-17T14:36:40.640Z] Total : 14118.67 55.15 0.00 0.00 0.00 0.00 0.00 00:08:26.950 00:08:28.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.330 Nvme0n1 : 4.00 14241.50 55.63 0.00 0.00 0.00 0.00 0.00 00:08:28.330 [2024-10-17T14:36:42.020Z] =================================================================================================================== 00:08:28.330 [2024-10-17T14:36:42.020Z] Total : 14241.50 55.63 0.00 0.00 0.00 0.00 0.00 00:08:28.330 00:08:29.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.269 Nvme0n1 : 5.00 14327.20 55.97 0.00 0.00 0.00 0.00 0.00 00:08:29.269 [2024-10-17T14:36:42.959Z] =================================================================================================================== 00:08:29.269 [2024-10-17T14:36:42.959Z] Total : 14327.20 55.97 0.00 0.00 0.00 0.00 0.00 00:08:29.269 00:08:30.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.207 Nvme0n1 : 6.00 14395.17 56.23 0.00 0.00 0.00 0.00 0.00 00:08:30.207 [2024-10-17T14:36:43.897Z] =================================================================================================================== 00:08:30.207 
[2024-10-17T14:36:43.897Z] Total : 14395.17 56.23 0.00 0.00 0.00 0.00 0.00 00:08:30.207 00:08:31.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.145 Nvme0n1 : 7.00 14461.43 56.49 0.00 0.00 0.00 0.00 0.00 00:08:31.145 [2024-10-17T14:36:44.835Z] =================================================================================================================== 00:08:31.145 [2024-10-17T14:36:44.836Z] Total : 14461.43 56.49 0.00 0.00 0.00 0.00 0.00 00:08:31.146 00:08:32.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.084 Nvme0n1 : 8.00 14515.38 56.70 0.00 0.00 0.00 0.00 0.00 00:08:32.084 [2024-10-17T14:36:45.774Z] =================================================================================================================== 00:08:32.084 [2024-10-17T14:36:45.774Z] Total : 14515.38 56.70 0.00 0.00 0.00 0.00 0.00 00:08:32.084 00:08:33.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.022 Nvme0n1 : 9.00 14567.67 56.90 0.00 0.00 0.00 0.00 0.00 00:08:33.022 [2024-10-17T14:36:46.713Z] =================================================================================================================== 00:08:33.023 [2024-10-17T14:36:46.713Z] Total : 14567.67 56.90 0.00 0.00 0.00 0.00 0.00 00:08:33.023 00:08:33.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.978 Nvme0n1 : 10.00 14596.80 57.02 0.00 0.00 0.00 0.00 0.00 00:08:33.978 [2024-10-17T14:36:47.668Z] =================================================================================================================== 00:08:33.978 [2024-10-17T14:36:47.668Z] Total : 14596.80 57.02 0.00 0.00 0.00 0.00 0.00 00:08:33.978 00:08:33.978 00:08:33.978 Latency(us) 00:08:33.978 [2024-10-17T14:36:47.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:33.978 Nvme0n1 : 10.01 14602.06 57.04 0.00 0.00 8760.95 4757.43 16990.81 00:08:33.978 [2024-10-17T14:36:47.668Z] =================================================================================================================== 00:08:33.978 [2024-10-17T14:36:47.668Z] Total : 14602.06 57.04 0.00 0.00 8760.95 4757.43 16990.81 00:08:33.978 { 00:08:33.978 "results": [ 00:08:33.978 { 00:08:33.978 "job": "Nvme0n1", 00:08:33.978 "core_mask": "0x2", 00:08:33.978 "workload": "randwrite", 00:08:33.978 "status": "finished", 00:08:33.978 "queue_depth": 128, 00:08:33.978 "io_size": 4096, 00:08:33.978 "runtime": 10.005161, 00:08:33.978 "iops": 14602.063874834199, 00:08:33.978 "mibps": 57.03931201107109, 00:08:33.978 "io_failed": 0, 00:08:33.978 "io_timeout": 0, 00:08:33.978 "avg_latency_us": 8760.949696637826, 00:08:33.978 "min_latency_us": 4757.4281481481485, 00:08:33.978 "max_latency_us": 16990.814814814814 00:08:33.978 } 00:08:33.978 ], 00:08:33.978 "core_count": 1 00:08:33.978 } 00:08:33.979 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2261664 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2261664 ']' 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2261664 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2261664 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:34.257 16:36:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2261664' 00:08:34.257 killing process with pid 2261664 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2261664 00:08:34.257 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.257 00:08:34.257 Latency(us) 00:08:34.257 [2024-10-17T14:36:47.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.257 [2024-10-17T14:36:47.947Z] =================================================================================================================== 00:08:34.257 [2024-10-17T14:36:47.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2261664 00:08:34.257 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.522 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.090 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:35.090 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:35.090 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:35.090 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:35.090 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2259022 00:08:35.090 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2259022 00:08:35.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2259022 Killed "${NVMF_APP[@]}" "$@" 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2263133 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2263133 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2263133 ']' 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.350 16:36:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.350 16:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.350 [2024-10-17 16:36:48.847324] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:08:35.350 [2024-10-17 16:36:48.847423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.350 [2024-10-17 16:36:48.912431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.350 [2024-10-17 16:36:48.970701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.350 [2024-10-17 16:36:48.970759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.350 [2024-10-17 16:36:48.970787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.350 [2024-10-17 16:36:48.970798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.350 [2024-10-17 16:36:48.970807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:35.350 [2024-10-17 16:36:48.971393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.609 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.867 [2024-10-17 16:36:49.417800] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:35.867 [2024-10-17 16:36:49.417939] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:35.867 [2024-10-17 16:36:49.417997] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f6c1a852-b6d8-413b-a758-b8eea67b274f 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f6c1a852-b6d8-413b-a758-b8eea67b274f 
00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.867 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:36.126 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f6c1a852-b6d8-413b-a758-b8eea67b274f -t 2000 00:08:36.386 [ 00:08:36.386 { 00:08:36.386 "name": "f6c1a852-b6d8-413b-a758-b8eea67b274f", 00:08:36.386 "aliases": [ 00:08:36.386 "lvs/lvol" 00:08:36.386 ], 00:08:36.386 "product_name": "Logical Volume", 00:08:36.386 "block_size": 4096, 00:08:36.386 "num_blocks": 38912, 00:08:36.386 "uuid": "f6c1a852-b6d8-413b-a758-b8eea67b274f", 00:08:36.386 "assigned_rate_limits": { 00:08:36.386 "rw_ios_per_sec": 0, 00:08:36.386 "rw_mbytes_per_sec": 0, 00:08:36.386 "r_mbytes_per_sec": 0, 00:08:36.386 "w_mbytes_per_sec": 0 00:08:36.386 }, 00:08:36.386 "claimed": false, 00:08:36.386 "zoned": false, 00:08:36.386 "supported_io_types": { 00:08:36.386 "read": true, 00:08:36.386 "write": true, 00:08:36.386 "unmap": true, 00:08:36.386 "flush": false, 00:08:36.386 "reset": true, 00:08:36.386 "nvme_admin": false, 00:08:36.386 "nvme_io": false, 00:08:36.386 "nvme_io_md": false, 00:08:36.386 "write_zeroes": true, 00:08:36.386 "zcopy": false, 00:08:36.386 "get_zone_info": false, 00:08:36.386 "zone_management": false, 00:08:36.386 "zone_append": 
false, 00:08:36.386 "compare": false, 00:08:36.386 "compare_and_write": false, 00:08:36.386 "abort": false, 00:08:36.386 "seek_hole": true, 00:08:36.386 "seek_data": true, 00:08:36.386 "copy": false, 00:08:36.386 "nvme_iov_md": false 00:08:36.386 }, 00:08:36.386 "driver_specific": { 00:08:36.386 "lvol": { 00:08:36.386 "lvol_store_uuid": "50d4620a-604d-4a95-a51e-61db88441fb8", 00:08:36.386 "base_bdev": "aio_bdev", 00:08:36.386 "thin_provision": false, 00:08:36.386 "num_allocated_clusters": 38, 00:08:36.386 "snapshot": false, 00:08:36.386 "clone": false, 00:08:36.386 "esnap_clone": false 00:08:36.386 } 00:08:36.386 } 00:08:36.386 } 00:08:36.386 ] 00:08:36.386 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:36.386 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:36.386 16:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:36.645 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:36.645 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:36.645 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:36.904 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:36.904 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:37.164 [2024-10-17 16:36:50.811313] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.164 16:36:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:37.164 16:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:37.731 request: 00:08:37.731 { 00:08:37.731 "uuid": "50d4620a-604d-4a95-a51e-61db88441fb8", 00:08:37.731 "method": "bdev_lvol_get_lvstores", 00:08:37.731 "req_id": 1 00:08:37.731 } 00:08:37.731 Got JSON-RPC error response 00:08:37.731 response: 00:08:37.731 { 00:08:37.731 "code": -19, 00:08:37.731 "message": "No such device" 00:08:37.731 } 00:08:37.731 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:37.731 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.731 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.731 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.731 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.731 aio_bdev 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f6c1a852-b6d8-413b-a758-b8eea67b274f 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f6c1a852-b6d8-413b-a758-b8eea67b274f 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.990 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.249 16:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f6c1a852-b6d8-413b-a758-b8eea67b274f -t 2000 00:08:38.510 [ 00:08:38.510 { 00:08:38.510 "name": "f6c1a852-b6d8-413b-a758-b8eea67b274f", 00:08:38.510 "aliases": [ 00:08:38.510 "lvs/lvol" 00:08:38.510 ], 00:08:38.510 "product_name": "Logical Volume", 00:08:38.510 "block_size": 4096, 00:08:38.510 "num_blocks": 38912, 00:08:38.510 "uuid": "f6c1a852-b6d8-413b-a758-b8eea67b274f", 00:08:38.510 "assigned_rate_limits": { 00:08:38.510 "rw_ios_per_sec": 0, 00:08:38.510 "rw_mbytes_per_sec": 0, 00:08:38.510 "r_mbytes_per_sec": 0, 00:08:38.510 "w_mbytes_per_sec": 0 00:08:38.510 }, 00:08:38.510 "claimed": false, 00:08:38.510 "zoned": false, 00:08:38.510 "supported_io_types": { 00:08:38.510 "read": true, 00:08:38.510 "write": true, 00:08:38.510 "unmap": true, 00:08:38.510 "flush": false, 00:08:38.510 "reset": true, 00:08:38.510 "nvme_admin": false, 00:08:38.510 "nvme_io": false, 00:08:38.510 "nvme_io_md": false, 00:08:38.510 "write_zeroes": true, 00:08:38.510 "zcopy": false, 00:08:38.510 "get_zone_info": false, 00:08:38.510 "zone_management": false, 00:08:38.510 "zone_append": false, 00:08:38.510 "compare": false, 00:08:38.510 "compare_and_write": false, 
00:08:38.510 "abort": false, 00:08:38.510 "seek_hole": true, 00:08:38.510 "seek_data": true, 00:08:38.510 "copy": false, 00:08:38.510 "nvme_iov_md": false 00:08:38.510 }, 00:08:38.510 "driver_specific": { 00:08:38.510 "lvol": { 00:08:38.510 "lvol_store_uuid": "50d4620a-604d-4a95-a51e-61db88441fb8", 00:08:38.510 "base_bdev": "aio_bdev", 00:08:38.510 "thin_provision": false, 00:08:38.510 "num_allocated_clusters": 38, 00:08:38.510 "snapshot": false, 00:08:38.510 "clone": false, 00:08:38.510 "esnap_clone": false 00:08:38.510 } 00:08:38.510 } 00:08:38.510 } 00:08:38.510 ] 00:08:38.510 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:38.510 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:38.510 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.768 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.768 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:38.768 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:39.028 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:39.028 16:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f6c1a852-b6d8-413b-a758-b8eea67b274f 00:08:39.286 16:36:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50d4620a-604d-4a95-a51e-61db88441fb8 00:08:39.545 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.804 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.805 00:08:39.805 real 0m19.677s 00:08:39.805 user 0m49.894s 00:08:39.805 sys 0m4.475s 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.805 ************************************ 00:08:39.805 END TEST lvs_grow_dirty 00:08:39.805 ************************************ 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:39.805 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:39.805 nvmf_trace.0 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.063 rmmod nvme_tcp 00:08:40.063 rmmod nvme_fabrics 00:08:40.063 rmmod nvme_keyring 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2263133 ']' 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2263133 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2263133 ']' 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2263133 
00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2263133 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2263133' 00:08:40.063 killing process with pid 2263133 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2263133 00:08:40.063 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2263133 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.341 16:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.340 00:08:42.340 real 0m43.110s 00:08:42.340 user 1m13.719s 00:08:42.340 sys 0m8.222s 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.340 ************************************ 00:08:42.340 END TEST nvmf_lvs_grow 00:08:42.340 ************************************ 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.340 ************************************ 00:08:42.340 START TEST nvmf_bdev_io_wait 00:08:42.340 ************************************ 00:08:42.340 16:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:42.340 * Looking for test storage... 
00:08:42.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.340 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.340 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.340 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:42.599 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.600 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.600 --rc genhtml_branch_coverage=1 00:08:42.600 --rc genhtml_function_coverage=1 00:08:42.600 --rc genhtml_legend=1 00:08:42.600 --rc geninfo_all_blocks=1 00:08:42.600 --rc geninfo_unexecuted_blocks=1 00:08:42.600 00:08:42.600 ' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.600 --rc genhtml_branch_coverage=1 00:08:42.600 --rc genhtml_function_coverage=1 00:08:42.600 --rc genhtml_legend=1 00:08:42.600 --rc geninfo_all_blocks=1 00:08:42.600 --rc geninfo_unexecuted_blocks=1 00:08:42.600 00:08:42.600 ' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.600 --rc genhtml_branch_coverage=1 00:08:42.600 --rc genhtml_function_coverage=1 00:08:42.600 --rc genhtml_legend=1 00:08:42.600 --rc geninfo_all_blocks=1 00:08:42.600 --rc geninfo_unexecuted_blocks=1 00:08:42.600 00:08:42.600 ' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.600 --rc genhtml_branch_coverage=1 00:08:42.600 --rc genhtml_function_coverage=1 00:08:42.600 --rc genhtml_legend=1 00:08:42.600 --rc geninfo_all_blocks=1 00:08:42.600 --rc geninfo_unexecuted_blocks=1 00:08:42.600 00:08:42.600 ' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.600 16:36:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.600 16:36:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.510 16:36:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.510 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:44.511 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:44.511 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.511 16:36:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:44.511 Found net devices under 0000:09:00.0: cvl_0_0 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.511 
16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:44.511 Found net devices under 0000:09:00.1: cvl_0_1 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.511 16:36:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.511 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:08:44.772 00:08:44.772 --- 10.0.0.2 ping statistics --- 00:08:44.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.772 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:08:44.772 00:08:44.772 --- 10.0.0.1 ping statistics --- 00:08:44.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.772 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2265717 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # waitforlisten 2265717 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2265717 ']' 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.772 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.772 [2024-10-17 16:36:58.295172] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:08:44.772 [2024-10-17 16:36:58.295266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.772 [2024-10-17 16:36:58.366409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.772 [2024-10-17 16:36:58.429171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.772 [2024-10-17 16:36:58.429226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:44.772 [2024-10-17 16:36:58.429241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.772 [2024-10-17 16:36:58.429252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.772 [2024-10-17 16:36:58.429262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.772 [2024-10-17 16:36:58.430953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.772 [2024-10-17 16:36:58.431041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.772 [2024-10-17 16:36:58.431036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.772 [2024-10-17 16:36:58.430977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 16:36:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 [2024-10-17 16:36:58.630132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 Malloc0 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 
16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.032 [2024-10-17 16:36:58.681166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2265817 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2265819 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
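The RPC sequence the test just ran (bdev_set_options through nvmf_subsystem_add_listener) can be replayed by hand against a target started with `--wait-for-rpc`. A hedged sketch assuming the standard `scripts/rpc.py` client shipped with SPDK; the `rpc` dry-run wrapper is an illustration, not the harness's `rpc_cmd`:

```shell
# Dry-run wrapper: prints each call instead of issuing it. Drop the
# "echo" to run against a live target on SPDK's default /var/tmp/spdk.sock.
rpc() { echo scripts/rpc.py "$@"; }

rpc bdev_set_options -p 5 -c 1        # tiny bdev IO pool, to exercise IO_WAIT
rpc framework_start_init              # leave the --wait-for-rpc holding state
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The flags are copied from the log above; in the harness the same calls go through `rpc_cmd` inside the target's network namespace.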
00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:45.032 { 00:08:45.032 "params": { 00:08:45.032 "name": "Nvme$subsystem", 00:08:45.032 "trtype": "$TEST_TRANSPORT", 00:08:45.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.032 "adrfam": "ipv4", 00:08:45.032 "trsvcid": "$NVMF_PORT", 00:08:45.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.032 "hdgst": ${hdgst:-false}, 00:08:45.032 "ddgst": ${ddgst:-false} 00:08:45.032 }, 00:08:45.032 "method": "bdev_nvme_attach_controller" 00:08:45.032 } 00:08:45.032 EOF 00:08:45.032 )") 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2265821 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:45.032 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:45.032 { 00:08:45.032 "params": { 00:08:45.032 
"name": "Nvme$subsystem", 00:08:45.032 "trtype": "$TEST_TRANSPORT", 00:08:45.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.032 "adrfam": "ipv4", 00:08:45.032 "trsvcid": "$NVMF_PORT", 00:08:45.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.032 "hdgst": ${hdgst:-false}, 00:08:45.032 "ddgst": ${ddgst:-false} 00:08:45.032 }, 00:08:45.032 "method": "bdev_nvme_attach_controller" 00:08:45.033 } 00:08:45.033 EOF 00:08:45.033 )") 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2265824 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:45.033 { 00:08:45.033 "params": { 00:08:45.033 "name": "Nvme$subsystem", 00:08:45.033 "trtype": "$TEST_TRANSPORT", 00:08:45.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.033 "adrfam": "ipv4", 00:08:45.033 "trsvcid": "$NVMF_PORT", 00:08:45.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.033 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:45.033 "hdgst": ${hdgst:-false}, 00:08:45.033 "ddgst": ${ddgst:-false} 00:08:45.033 }, 00:08:45.033 "method": "bdev_nvme_attach_controller" 00:08:45.033 } 00:08:45.033 EOF 00:08:45.033 )") 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:45.033 { 00:08:45.033 "params": { 00:08:45.033 "name": "Nvme$subsystem", 00:08:45.033 "trtype": "$TEST_TRANSPORT", 00:08:45.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.033 "adrfam": "ipv4", 00:08:45.033 "trsvcid": "$NVMF_PORT", 00:08:45.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.033 "hdgst": ${hdgst:-false}, 00:08:45.033 "ddgst": ${ddgst:-false} 00:08:45.033 }, 00:08:45.033 "method": "bdev_nvme_attach_controller" 00:08:45.033 } 00:08:45.033 EOF 00:08:45.033 )") 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2265817 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@580 -- # cat 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:45.033 "params": { 00:08:45.033 "name": "Nvme1", 00:08:45.033 "trtype": "tcp", 00:08:45.033 "traddr": "10.0.0.2", 00:08:45.033 "adrfam": "ipv4", 00:08:45.033 "trsvcid": "4420", 00:08:45.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.033 "hdgst": false, 00:08:45.033 "ddgst": false 00:08:45.033 }, 00:08:45.033 "method": "bdev_nvme_attach_controller" 00:08:45.033 }' 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
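The `gen_nvmf_target_json` fragments above follow a collect-then-merge pattern: each heredoc expands per-subsystem variables into one JSON object, the objects are gathered in a bash array, and `jq .` normalizes the joined result before bdevperf reads it via `--json /dev/fd/63`. A minimal sketch with illustrative values (the jq/`IFS=,` join step is noted in comments rather than executed here):

```shell
# Illustrative stand-ins for the harness variables.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# In the harness the array elements are joined with IFS=, and piped
# through `jq .`; here we just show the expanded fragment.
printf '%s\n' "${config[0]}"
```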
00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:45.033 "params": { 00:08:45.033 "name": "Nvme1", 00:08:45.033 "trtype": "tcp", 00:08:45.033 "traddr": "10.0.0.2", 00:08:45.033 "adrfam": "ipv4", 00:08:45.033 "trsvcid": "4420", 00:08:45.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.033 "hdgst": false, 00:08:45.033 "ddgst": false 00:08:45.033 }, 00:08:45.033 "method": "bdev_nvme_attach_controller" 00:08:45.033 }' 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:45.033 "params": { 00:08:45.033 "name": "Nvme1", 00:08:45.033 "trtype": "tcp", 00:08:45.033 "traddr": "10.0.0.2", 00:08:45.033 "adrfam": "ipv4", 00:08:45.033 "trsvcid": "4420", 00:08:45.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.033 "hdgst": false, 00:08:45.033 "ddgst": false 00:08:45.033 }, 00:08:45.033 "method": "bdev_nvme_attach_controller" 00:08:45.033 }' 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:45.033 16:36:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:45.033 "params": { 00:08:45.033 "name": "Nvme1", 00:08:45.033 "trtype": "tcp", 00:08:45.033 "traddr": "10.0.0.2", 00:08:45.033 "adrfam": "ipv4", 00:08:45.033 "trsvcid": "4420", 00:08:45.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.033 "hdgst": false, 00:08:45.033 "ddgst": false 00:08:45.033 }, 00:08:45.033 "method": "bdev_nvme_attach_controller" 00:08:45.033 }' 00:08:45.292 [2024-10-17 16:36:58.732739] Starting SPDK v25.01-pre git sha1 
767a69c7c / DPDK 24.03.0 initialization... 00:08:45.292 [2024-10-17 16:36:58.732740] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:08:45.292 [2024-10-17 16:36:58.732739] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:08:45.292 [2024-10-17 16:36:58.732740] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:08:45.292 [2024-10-17 16:36:58.732835] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:45.292 [2024-10-17 16:36:58.732836] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:45.292 [2024-10-17 16:36:58.732837] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:45.292 [2024-10-17 16:36:58.732837] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:45.292 [2024-10-17 16:36:58.899654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.292 [2024-10-17 16:36:58.954359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:45.551 [2024-10-17 16:36:59.000586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.551 [2024-10-17 16:36:59.055508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:45.551 [2024-10-17
16:36:59.101527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.551 [2024-10-17 16:36:59.156960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:45.551 [2024-10-17 16:36:59.205869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.811 [2024-10-17 16:36:59.262246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:45.811 Running I/O for 1 seconds...
00:08:45.811 Running I/O for 1 seconds...
00:08:45.811 Running I/O for 1 seconds...
00:08:46.070 Running I/O for 1 seconds...
00:08:47.009 6633.00 IOPS, 25.91 MiB/s
00:08:47.010 Latency(us)
00:08:47.010 [2024-10-17T14:37:00.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:47.010 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:47.010 Nvme1n1 : 1.02 6613.45 25.83 0.00 0.00 19055.58 7524.50 37671.06
00:08:47.010 [2024-10-17T14:37:00.700Z] ===================================================================================================================
00:08:47.010 [2024-10-17T14:37:00.700Z] Total : 6613.45 25.83 0.00 0.00 19055.58 7524.50 37671.06
00:08:47.010 9117.00 IOPS, 35.61 MiB/s
00:08:47.010 Latency(us)
00:08:47.010 [2024-10-17T14:37:00.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:47.010 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:47.010 Nvme1n1 : 1.01 9157.10 35.77 0.00 0.00 13905.64 8301.23 23884.23
00:08:47.010 [2024-10-17T14:37:00.700Z] ===================================================================================================================
00:08:47.010 [2024-10-17T14:37:00.700Z] Total : 9157.10 35.77 0.00 0.00 13905.64 8301.23 23884.23
00:08:47.010 6025.00 IOPS, 23.54 MiB/s
00:08:47.010 Latency(us)
00:08:47.010 [2024-10-17T14:37:00.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:47.010 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:47.010 Nvme1n1 : 1.01 6129.57 23.94 0.00 0.00 20814.89 4611.79 46797.56
00:08:47.010 [2024-10-17T14:37:00.700Z] ===================================================================================================================
00:08:47.010 [2024-10-17T14:37:00.700Z] Total : 6129.57 23.94 0.00 0.00 20814.89 4611.79 46797.56
00:08:47.010 188784.00 IOPS, 737.44 MiB/s
00:08:47.010 Latency(us)
00:08:47.010 [2024-10-17T14:37:00.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:47.010 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:47.010 Nvme1n1 : 1.00 188425.89 736.04 0.00 0.00 675.68 314.03 1881.13
00:08:47.010 [2024-10-17T14:37:00.700Z] ===================================================================================================================
00:08:47.010 [2024-10-17T14:37:00.700Z] Total : 188425.89 736.04 0.00 0.00 675.68 314.03 1881.13
00:08:47.010 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2265819
00:08:47.010 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2265821
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2265824
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:47.269 rmmod nvme_tcp
00:08:47.269 rmmod nvme_fabrics
00:08:47.269 rmmod nvme_keyring
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2265717 ']'
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2265717
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2265717 ']'
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2265717
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265717
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265717'
killing process with pid 2265717
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2265717
00:08:47.269 16:37:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2265717
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:47.528 16:37:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:47.528 16:37:01
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:49.438
00:08:49.438 real 0m7.134s
00:08:49.438 user 0m16.242s
00:08:49.438 sys 0m3.450s
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:49.438 ************************************
00:08:49.438 END TEST nvmf_bdev_io_wait
00:08:49.438 ************************************
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:49.438 16:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:49.697 ************************************
00:08:49.697 START TEST nvmf_queue_depth
00:08:49.697 ************************************
00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:49.697 * Looking for test storage...
00:08:49.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:49.697 
16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.697 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:49.697 --rc genhtml_branch_coverage=1 00:08:49.697 --rc genhtml_function_coverage=1 00:08:49.697 --rc genhtml_legend=1 00:08:49.697 --rc geninfo_all_blocks=1 00:08:49.697 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.697 --rc genhtml_branch_coverage=1 00:08:49.697 --rc genhtml_function_coverage=1 00:08:49.697 --rc genhtml_legend=1 00:08:49.697 --rc geninfo_all_blocks=1 00:08:49.697 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.697 --rc genhtml_branch_coverage=1 00:08:49.697 --rc genhtml_function_coverage=1 00:08:49.697 --rc genhtml_legend=1 00:08:49.697 --rc geninfo_all_blocks=1 00:08:49.697 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.697 --rc genhtml_branch_coverage=1 00:08:49.697 --rc genhtml_function_coverage=1 00:08:49.697 --rc genhtml_legend=1 00:08:49.697 --rc geninfo_all_blocks=1 00:08:49.697 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.697 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.698 16:37:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.698 16:37:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:49.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:49.698 16:37:03
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.698 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.233 16:37:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.233 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:52.233 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:52.234 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:52.234 Found net devices under 0000:09:00.0: cvl_0_0 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:52.234 Found net devices under 0000:09:00.1: cvl_0_1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.234 
16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:08:52.234 00:08:52.234 --- 10.0.0.2 ping statistics --- 00:08:52.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.234 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
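The namespace plumbing traced above (`nvmf/common.sh@267`–`@291`) amounts to the sequence below. This is a dry-run sketch, not the SPDK helper itself: `run()` only prints each command, so it is safe to execute without root or the `cvl_*` interfaces present; the interface names, addresses, and port are the ones from this log.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0      # NIC moved into the target namespace
INITIATOR_IF=cvl_0_1   # NIC left in the root namespace
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"            # drop stale addresses
run ip netns add "$NS"                       # namespace for the target side
run ip link set "$TARGET_IF" netns "$NS"     # move the target NIC inside
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                   # initiator -> target check
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

Splitting the two ports of one NIC across a namespace boundary is what lets a single host exercise both initiator and target over a real wire, which the two ping checks in the log then confirm in each direction.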
00:08:52.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:08:52.234 00:08:52.234 --- 10.0.0.1 ping statistics --- 00:08:52.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.234 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2268050 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2268050 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2268050 ']' 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.234 [2024-10-17 16:37:05.603761] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:08:52.234 [2024-10-17 16:37:05.603841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.234 [2024-10-17 16:37:05.676223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.234 [2024-10-17 16:37:05.736754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.234 [2024-10-17 16:37:05.736823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:52.234 [2024-10-17 16:37:05.736838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.234 [2024-10-17 16:37:05.736849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.234 [2024-10-17 16:37:05.736859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.234 [2024-10-17 16:37:05.737499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.234 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.235 [2024-10-17 16:37:05.890162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.235 Malloc0 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.235 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.495 [2024-10-17 16:37:05.940386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.495 16:37:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2268075 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2268075 /var/tmp/bdevperf.sock 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2268075 ']' 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.495 16:37:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.495 [2024-10-17 16:37:05.989368] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:08:52.495 [2024-10-17 16:37:05.989430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268075 ] 00:08:52.495 [2024-10-17 16:37:06.050606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.495 [2024-10-17 16:37:06.113321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.753 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.753 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:52.753 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:52.753 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.753 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.753 NVMe0n1 00:08:52.753 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.754 16:37:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:53.012 Running I/O for 10 seconds... 
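The bdevperf side of the test (`queue_depth.sh@29`–`@35` above) boils down to three steps: start bdevperf in wait mode, attach the remote controller over its RPC socket, then trigger the run. A dry-run sketch using the paths from this log (`run()` only echoes; nothing is executed):

```shell
run() { echo "+ $*"; }   # dry-run: print each command instead of executing

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
SOCK=/var/tmp/bdevperf.sock

# 1. Start bdevperf idle (-z) with queue depth 1024, 4 KiB verify I/O, 10 s.
run "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10

# 2. Attach the NVMe-oF/TCP controller the target exposes in its namespace.
run "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Kick off the actual I/O run against the attached NVMe0n1 bdev.
run "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```

The `-q 1024` here is the point of the whole test: it drives more outstanding I/O than the target transport's default queue depth, and the results table that follows shows the target sustaining ~8.2k IOPS under that load.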
00:08:54.889 7734.00 IOPS, 30.21 MiB/s [2024-10-17T14:37:09.520Z] 7903.00 IOPS, 30.87 MiB/s [2024-10-17T14:37:10.901Z] 7963.33 IOPS, 31.11 MiB/s [2024-10-17T14:37:11.839Z] 8055.00 IOPS, 31.46 MiB/s [2024-10-17T14:37:12.778Z] 8093.60 IOPS, 31.62 MiB/s [2024-10-17T14:37:13.716Z] 8156.00 IOPS, 31.86 MiB/s [2024-10-17T14:37:14.656Z] 8185.00 IOPS, 31.97 MiB/s [2024-10-17T14:37:15.617Z] 8188.75 IOPS, 31.99 MiB/s [2024-10-17T14:37:16.553Z] 8203.44 IOPS, 32.04 MiB/s [2024-10-17T14:37:16.814Z] 8229.60 IOPS, 32.15 MiB/s 00:09:03.124 Latency(us) 00:09:03.124 [2024-10-17T14:37:16.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.124 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:03.124 Verification LBA range: start 0x0 length 0x4000 00:09:03.124 NVMe0n1 : 10.08 8262.74 32.28 0.00 0.00 123284.19 21165.70 83886.08 00:09:03.124 [2024-10-17T14:37:16.814Z] =================================================================================================================== 00:09:03.124 [2024-10-17T14:37:16.814Z] Total : 8262.74 32.28 0.00 0.00 123284.19 21165.70 83886.08 00:09:03.124 { 00:09:03.124 "results": [ 00:09:03.124 { 00:09:03.124 "job": "NVMe0n1", 00:09:03.124 "core_mask": "0x1", 00:09:03.124 "workload": "verify", 00:09:03.124 "status": "finished", 00:09:03.124 "verify_range": { 00:09:03.124 "start": 0, 00:09:03.124 "length": 16384 00:09:03.124 }, 00:09:03.124 "queue_depth": 1024, 00:09:03.124 "io_size": 4096, 00:09:03.124 "runtime": 10.081765, 00:09:03.124 "iops": 8262.739708771232, 00:09:03.124 "mibps": 32.27632698738763, 00:09:03.124 "io_failed": 0, 00:09:03.124 "io_timeout": 0, 00:09:03.124 "avg_latency_us": 123284.19098343796, 00:09:03.124 "min_latency_us": 21165.70074074074, 00:09:03.124 "max_latency_us": 83886.08 00:09:03.124 } 00:09:03.124 ], 00:09:03.124 "core_count": 1 00:09:03.124 } 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
2268075 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2268075 ']' 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2268075 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2268075 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2268075' 00:09:03.124 killing process with pid 2268075 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2268075 00:09:03.124 Received shutdown signal, test time was about 10.000000 seconds 00:09:03.124 00:09:03.124 Latency(us) 00:09:03.124 [2024-10-17T14:37:16.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.124 [2024-10-17T14:37:16.814Z] =================================================================================================================== 00:09:03.124 [2024-10-17T14:37:16.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:03.124 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2268075 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
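The `nvmftestfini` path traced below unloads the kernel transport modules and scrubs only the firewall rules the setup tagged with the `SPDK_NVMF` comment. A dry-run sketch of that teardown (`run()` only echoes; the pipeline in step two is shown as a single quoted string since it would otherwise need a shell to evaluate):

```shell
run() { echo "+ $*"; }   # dry-run wrapper

NS=cvl_0_0_ns_spdk

run modprobe -v -r nvme-tcp       # unload initiator transport modules
run modprobe -v -r nvme-fabrics

# Remove only the rules tagged SPDK_NVMF at setup time: dump the ruleset,
# filter out the tagged lines, and restore what remains.
run "iptables-save | grep -v SPDK_NVMF | iptables-restore"

run ip netns delete "$NS"         # tear down the target namespace
run ip -4 addr flush cvl_0_1      # and flush the initiator-side address
```

Tagging rules with a comment at insert time is what makes this save/filter/restore scrub safe: it cannot disturb unrelated firewall rules on the CI host, which matters on a shared test node.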
00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.384 rmmod nvme_tcp 00:09:03.384 rmmod nvme_fabrics 00:09:03.384 rmmod nvme_keyring 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2268050 ']' 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2268050 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2268050 ']' 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2268050 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.384 16:37:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2268050 00:09:03.384 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:09:03.384 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:03.384 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2268050' 00:09:03.384 killing process with pid 2268050 00:09:03.384 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2268050 00:09:03.384 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2268050 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.642 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.255 16:37:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.255 00:09:06.255 real 0m16.184s 00:09:06.255 user 0m22.866s 00:09:06.255 sys 0m3.013s 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.255 ************************************ 00:09:06.255 END TEST nvmf_queue_depth 00:09:06.255 ************************************ 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.255 ************************************ 00:09:06.255 START TEST nvmf_target_multipath 00:09:06.255 ************************************ 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:06.255 * Looking for test storage... 
00:09:06.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:06.255 16:37:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
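The `lt 1.15 2` / `cmp_versions` trace above splits each version string on dots and dashes and compares numerically field by field (so `1.9 < 1.15`, unlike a lexical compare). A condensed pure-bash re-sketch of that logic, not the `scripts/common.sh` implementation itself:

```shell
# Condensed sketch of the cmp_versions logic traced above: split each
# version on '.', '-' and ':', then compare numerically field by field.
# Returns 0 when $1 < $2, mirroring "lt 1.15 2" in the log.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
    for (( i = 0; i < n; i++ )); do
        # Missing trailing fields count as 0, so "2" compares as "2.0".
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The per-field numeric compare is why the script correctly treats lcov 1.15 as newer than 1.9 when deciding which coverage flags to pass.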
00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:06.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.255 --rc genhtml_branch_coverage=1 00:09:06.255 --rc genhtml_function_coverage=1 00:09:06.255 --rc genhtml_legend=1 00:09:06.255 --rc geninfo_all_blocks=1 00:09:06.255 --rc geninfo_unexecuted_blocks=1 00:09:06.255 00:09:06.255 ' 00:09:06.255 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:06.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.255 --rc genhtml_branch_coverage=1 00:09:06.255 --rc genhtml_function_coverage=1 00:09:06.255 --rc genhtml_legend=1 00:09:06.255 --rc geninfo_all_blocks=1 00:09:06.255 --rc geninfo_unexecuted_blocks=1 00:09:06.255 00:09:06.256 ' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:06.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.256 --rc genhtml_branch_coverage=1 00:09:06.256 --rc genhtml_function_coverage=1 00:09:06.256 --rc genhtml_legend=1 00:09:06.256 --rc geninfo_all_blocks=1 00:09:06.256 --rc geninfo_unexecuted_blocks=1 00:09:06.256 00:09:06.256 ' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:06.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.256 --rc genhtml_branch_coverage=1 00:09:06.256 --rc genhtml_function_coverage=1 00:09:06.256 --rc genhtml_legend=1 00:09:06.256 --rc geninfo_all_blocks=1 00:09:06.256 --rc geninfo_unexecuted_blocks=1 00:09:06.256 00:09:06.256 ' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.256 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:08.167 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:08.167 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:08.167 Found net devices under 0000:09:00.0: cvl_0_0 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:08.167 16:37:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:08.167 Found net devices under 0000:09:00.1: cvl_0_1 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.167 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:08.168 00:09:08.168 --- 10.0.0.2 ping statistics --- 00:09:08.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.168 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:09:08.168 00:09:08.168 --- 10.0.0.1 ping statistics --- 00:09:08.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.168 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:08.168 only one NIC for nvmf test 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:08.168 16:37:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.168 rmmod nvme_tcp 00:09:08.168 rmmod nvme_fabrics 00:09:08.168 rmmod nvme_keyring 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.168 16:37:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.081 00:09:10.081 real 0m4.375s 00:09:10.081 user 0m0.833s 00:09:10.081 sys 0m1.534s 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.081 ************************************ 00:09:10.081 END TEST nvmf_target_multipath 00:09:10.081 ************************************ 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.081 16:37:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.341 ************************************ 00:09:10.341 START TEST nvmf_zcopy 00:09:10.341 ************************************ 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:10.341 * Looking for test storage... 00:09:10.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.341 16:37:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.341 --rc genhtml_branch_coverage=1 00:09:10.341 --rc genhtml_function_coverage=1 00:09:10.341 --rc genhtml_legend=1 00:09:10.341 --rc geninfo_all_blocks=1 00:09:10.341 --rc geninfo_unexecuted_blocks=1 00:09:10.341 00:09:10.341 ' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.341 --rc genhtml_branch_coverage=1 00:09:10.341 --rc genhtml_function_coverage=1 00:09:10.341 --rc genhtml_legend=1 00:09:10.341 --rc geninfo_all_blocks=1 00:09:10.341 --rc geninfo_unexecuted_blocks=1 00:09:10.341 00:09:10.341 ' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.341 --rc genhtml_branch_coverage=1 00:09:10.341 --rc genhtml_function_coverage=1 00:09:10.341 --rc genhtml_legend=1 00:09:10.341 --rc geninfo_all_blocks=1 00:09:10.341 --rc geninfo_unexecuted_blocks=1 00:09:10.341 00:09:10.341 ' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.341 --rc genhtml_branch_coverage=1 00:09:10.341 --rc 
genhtml_function_coverage=1 00:09:10.341 --rc genhtml_legend=1 00:09:10.341 --rc geninfo_all_blocks=1 00:09:10.341 --rc geninfo_unexecuted_blocks=1 00:09:10.341 00:09:10.341 ' 00:09:10.341 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.342 16:37:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:10.342 16:37:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.342 16:37:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.877 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.877 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.877 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.877 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.878 16:37:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:12.878 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:12.878 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:12.878 Found net devices under 0000:09:00.0: cvl_0_0 00:09:12.878 16:37:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:12.878 Found net devices under 0000:09:00.1: cvl_0_1 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.878 16:37:25 
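The device-discovery loop above maps each supported PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/`*. A minimal sketch of that lookup, pulled out into a hypothetical helper (`pci_to_netdevs` is not a function in common.sh; the sysfs root is parameterized only so the logic can be exercised against a fake tree):

```shell
# Sketch of the sysfs lookup behind "Found net devices under <pci>: ..."
# above. Given a PCI address, print the net interface names bound to it.
pci_to_netdevs() {
    local pci=$1 sysfs_root=${2:-/sys} dev
    for dev in "$sysfs_root/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue   # glob matched nothing: no netdev bound
        echo "${dev##*/}"           # strip the path, keep the iface name
    done
}
```

In the trace the unexpanded glob is stored in `pci_net_devs` and later trimmed with `${pci_net_devs[@]##*/}`; the helper above performs the same two steps inline.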
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.878 16:37:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:09:12.878 00:09:12.878 --- 10.0.0.2 ping statistics --- 00:09:12.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.878 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:09:12.878 00:09:12.878 --- 10.0.0.1 ping statistics --- 00:09:12.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.878 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:12.878 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2273280 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
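The `ipts` call expanded at nvmf/common.sh@788 above shows how the harness tags its firewall rules: every argument list is replayed into the rule's comment, so teardown can later find and delete exactly the rules the test added. A sketch of a wrapper with that observed behavior (the real helper lives in nvmf/common.sh; `IPTABLES` is parameterized here only so the sketch can run without root):

```shell
# Forward all arguments to iptables, appending a comment that records the
# original rule spec under an "SPDK_NVMF:" prefix, matching the expansion
# visible in the trace above.
ipts() {
    "${IPTABLES:-iptables}" "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

Cleanup code can then do `iptables-save | grep SPDK_NVMF` and replay each match with `-D` to remove only test-owned rules.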
0x2 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2273280 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2273280 ']' 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 [2024-10-17 16:37:26.191269] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:09:12.879 [2024-10-17 16:37:26.191366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.879 [2024-10-17 16:37:26.255786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.879 [2024-10-17 16:37:26.312665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.879 [2024-10-17 16:37:26.312716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:12.879 [2024-10-17 16:37:26.312729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.879 [2024-10-17 16:37:26.312739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.879 [2024-10-17 16:37:26.312748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.879 [2024-10-17 16:37:26.313350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 [2024-10-17 16:37:26.466622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 [2024-10-17 16:37:26.482846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 malloc0 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:12.879 { 00:09:12.879 "params": { 00:09:12.879 "name": "Nvme$subsystem", 00:09:12.879 "trtype": "$TEST_TRANSPORT", 00:09:12.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.879 "adrfam": "ipv4", 00:09:12.879 "trsvcid": "$NVMF_PORT", 00:09:12.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.879 "hdgst": ${hdgst:-false}, 00:09:12.879 "ddgst": ${ddgst:-false} 00:09:12.879 }, 00:09:12.879 "method": "bdev_nvme_attach_controller" 00:09:12.879 } 00:09:12.879 EOF 00:09:12.879 )") 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:12.879 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:12.879 "params": { 00:09:12.879 "name": "Nvme1", 00:09:12.879 "trtype": "tcp", 00:09:12.879 "traddr": "10.0.0.2", 00:09:12.879 "adrfam": "ipv4", 00:09:12.879 "trsvcid": "4420", 00:09:12.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.879 "hdgst": false, 00:09:12.879 "ddgst": false 00:09:12.879 }, 00:09:12.879 "method": "bdev_nvme_attach_controller" 00:09:12.879 }' 00:09:13.138 [2024-10-17 16:37:26.570528] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:09:13.138 [2024-10-17 16:37:26.570615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273306 ] 00:09:13.138 [2024-10-17 16:37:26.637515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.138 [2024-10-17 16:37:26.702831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.398 Running I/O for 10 seconds... 
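The `gen_nvmf_target_json` expansion above builds the bdevperf configuration on the fly: one heredoc fragment per subsystem is collected into an array, joined with commas, and fed to bdevperf via `--json /dev/fd/62`. A condensed sketch of that pattern (the real helper also pipes the result through `jq`, omitted here; variable defaults are illustrative, mirroring the values printed in the trace):

```shell
# Emit one bdev_nvme_attach_controller JSON stanza per subsystem id given
# as an argument (default: subsystem 1), comma-joined as in the trace.
gen_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # join fragments with commas
}
```

Generating the config this way keeps the test free of temporary files: the JSON exists only on the anonymous pipe backing `/dev/fd/62`.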
00:09:15.279 5066.00 IOPS, 39.58 MiB/s [2024-10-17T14:37:30.348Z] 5136.50 IOPS, 40.13 MiB/s [2024-10-17T14:37:31.287Z] 5143.00 IOPS, 40.18 MiB/s [2024-10-17T14:37:32.227Z] 5132.50 IOPS, 40.10 MiB/s [2024-10-17T14:37:33.166Z] 5150.60 IOPS, 40.24 MiB/s [2024-10-17T14:37:34.104Z] 5153.33 IOPS, 40.26 MiB/s [2024-10-17T14:37:35.044Z] 5159.14 IOPS, 40.31 MiB/s [2024-10-17T14:37:35.981Z] 5156.88 IOPS, 40.29 MiB/s [2024-10-17T14:37:37.363Z] 5156.67 IOPS, 40.29 MiB/s [2024-10-17T14:37:37.363Z] 5169.80 IOPS, 40.39 MiB/s 00:09:23.673 Latency(us) 00:09:23.673 [2024-10-17T14:37:37.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:23.673 Verification LBA range: start 0x0 length 0x1000 00:09:23.673 Nvme1n1 : 10.02 5171.57 40.40 0.00 0.00 24684.39 4150.61 34564.17 00:09:23.673 [2024-10-17T14:37:37.363Z] =================================================================================================================== 00:09:23.673 [2024-10-17T14:37:37.363Z] Total : 5171.57 40.40 0.00 0.00 24684.39 4150.61 34564.17 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2274624 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:23.673 16:37:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:23.673 { 00:09:23.673 "params": { 00:09:23.673 "name": "Nvme$subsystem", 00:09:23.673 "trtype": "$TEST_TRANSPORT", 00:09:23.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.673 "adrfam": "ipv4", 00:09:23.673 "trsvcid": "$NVMF_PORT", 00:09:23.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.673 "hdgst": ${hdgst:-false}, 00:09:23.673 "ddgst": ${ddgst:-false} 00:09:23.673 }, 00:09:23.673 "method": "bdev_nvme_attach_controller" 00:09:23.673 } 00:09:23.673 EOF 00:09:23.673 )") 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:23.673 [2024-10-17 16:37:37.196664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.196709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:23.673 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:23.673 "params": { 00:09:23.673 "name": "Nvme1", 00:09:23.673 "trtype": "tcp", 00:09:23.673 "traddr": "10.0.0.2", 00:09:23.673 "adrfam": "ipv4", 00:09:23.673 "trsvcid": "4420", 00:09:23.673 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.673 "hdgst": false, 00:09:23.673 "ddgst": false 00:09:23.673 }, 00:09:23.673 "method": "bdev_nvme_attach_controller" 00:09:23.673 }' 00:09:23.673 [2024-10-17 16:37:37.204632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.204660] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.212645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.212669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.220661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.220683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.228679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.228700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.236699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.236719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.236918] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:09:23.673 [2024-10-17 16:37:37.236976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274624 ] 00:09:23.673 [2024-10-17 16:37:37.244721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.244742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.252743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.252764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.260765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.260787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.268786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.268806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.276826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.276850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.673 [2024-10-17 16:37:37.284847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.673 [2024-10-17 16:37:37.284871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.292871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.292895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:23.674 [2024-10-17 16:37:37.300817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.674 [2024-10-17 16:37:37.300892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.300922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.308927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.308961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.316958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.316996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.324957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.324982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.332977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.333008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.340998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.341029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.349026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.349062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.674 [2024-10-17 16:37:37.357066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.674 [2024-10-17 16:37:37.357091] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.365082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.365105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.367020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.934 [2024-10-17 16:37:37.373095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.373116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.381117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.381139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.389150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.389182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.397172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.397206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.405191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.405224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.413212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.413247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.421235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:23.934 [2024-10-17 16:37:37.421269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.429257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.429305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.437257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.437295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.445304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.445332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.453338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.453376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.461375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.461415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.469396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.469422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.477393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.477418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.485408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 
16:37:37.485432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.493515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.493547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.501481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.501510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.509506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.509534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.517533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.517562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.525553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.525581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.533579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.533607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.541599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.541627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.549623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.549648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.557660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.557691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.565680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.565706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 Running I/O for 5 seconds... 00:09:23.934 [2024-10-17 16:37:37.578997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.579038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.934 [2024-10-17 16:37:37.590216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.934 [2024-10-17 16:37:37.590245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.935 [2024-10-17 16:37:37.601751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.935 [2024-10-17 16:37:37.601784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.935 [2024-10-17 16:37:37.613810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.935 [2024-10-17 16:37:37.613849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.193 [2024-10-17 16:37:37.626064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.193 [2024-10-17 16:37:37.626094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.193 [2024-10-17 16:37:37.637778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.193 [2024-10-17 16:37:37.637810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:24.193 [2024-10-17 16:37:37.649471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.649504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.660982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.661039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.672599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.672630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.684172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.684202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.695513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.695545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.707089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.707117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.718719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.718750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.730207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.730235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 
16:37:37.742120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.742148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.753332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.753363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.765015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.765061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.776196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.776224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.787911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.787943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.799741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.799771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.811076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.811104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.822938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.822970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.834971] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.835047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.846771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.846803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.859164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.859193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.870480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.870512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.194 [2024-10-17 16:37:37.881554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.194 [2024-10-17 16:37:37.881585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.893447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.893495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.904877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.904907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.916090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.916118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.927093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.927122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.938260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.938298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.950122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.950150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.961977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.962027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.974228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.974256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.985779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.985810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:37.997550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:37.997580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.009260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.009313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.020996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 
[2024-10-17 16:37:38.021050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.032739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.032770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.044949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.044980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.056579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.056618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.068513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.068545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.080650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.080681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.091939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.091970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.103673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.103704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.115050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.115078] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.126970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.127011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.454 [2024-10-17 16:37:38.138450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.454 [2024-10-17 16:37:38.138482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.715 [2024-10-17 16:37:38.149807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.715 [2024-10-17 16:37:38.149840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.715 [2024-10-17 16:37:38.161590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.715 [2024-10-17 16:37:38.161621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.715 [2024-10-17 16:37:38.173214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.715 [2024-10-17 16:37:38.173242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.715 [2024-10-17 16:37:38.184641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.715 [2024-10-17 16:37:38.184672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.715 [2024-10-17 16:37:38.196266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.715 [2024-10-17 16:37:38.196295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.715 [2024-10-17 16:37:38.207549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.715 [2024-10-17 16:37:38.207582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:24.716 [2024-10-17 16:37:38.219271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.219318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.230913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.230945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.242754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.242784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.254741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.254772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.266792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.266824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.278754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.278799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.292535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.292567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.303552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.303584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.315380] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.315411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.326944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.326975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.338491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.338524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.350273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.350301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.362482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.362514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.374521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.374553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.386172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.386201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.716 [2024-10-17 16:37:38.397950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.716 [2024-10-17 16:37:38.397982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.976 [2024-10-17 16:37:38.409375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:24.976 [2024-10-17 16:37:38.409407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.976 [2024-10-17 16:37:38.420925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.976 [2024-10-17 16:37:38.420956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.976 [2024-10-17 16:37:38.432479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.976 [2024-10-17 16:37:38.432510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.976 [2024-10-17 16:37:38.444138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.976 [2024-10-17 16:37:38.444166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.976 [2024-10-17 16:37:38.456188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.976 [2024-10-17 16:37:38.456217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.976 [2024-10-17 16:37:38.468093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.976 [2024-10-17 16:37:38.468122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.481694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.481726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.492964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.492994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.504644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 
[2024-10-17 16:37:38.504675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.516055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.516084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.527379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.527412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.540952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.540984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.551882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.551914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.563869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.563900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 10818.00 IOPS, 84.52 MiB/s [2024-10-17T14:37:38.667Z] [2024-10-17 16:37:38.575016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.575061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.586256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.586284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.597952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 
[2024-10-17 16:37:38.597983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.609348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.609379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.620899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.620931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.632699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.632731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.644120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.644148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.977 [2024-10-17 16:37:38.655625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.977 [2024-10-17 16:37:38.655656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.667398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.667431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.679049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.679079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.690441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.690472] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.702046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.702075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.713739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.713770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.725467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.725499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.737213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.737242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.750894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.750926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.761926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.761957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.773255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.773283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.787158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.787187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:25.237 [2024-10-17 16:37:38.798642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.798674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.810238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.810267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.821849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.821881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.833466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.833498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.847376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.847408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.858266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.858314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.869476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.869509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.883609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.883641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.894856] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.894888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.906230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.906258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.237 [2024-10-17 16:37:38.920160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.237 [2024-10-17 16:37:38.920189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.931012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.931044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.942393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.942424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.953413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.953445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.964786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.964818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.976371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.976403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.987940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.987972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:38.999456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:38.999487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.010827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.010858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.022184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.022212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.033383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.033414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.045288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.045334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.057125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.057153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.070532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.070564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.081150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 
[2024-10-17 16:37:39.081182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.092624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.092655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.104482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.104513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.117877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.117908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.128608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.128643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.139511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.139553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.152180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.152209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.161897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.161932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.172916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.172945] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.497 [2024-10-17 16:37:39.185987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.497 [2024-10-17 16:37:39.186027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.197421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.197465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.206533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.206562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.218176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.218204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.230945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.230974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.240953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.240981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.251840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.251868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.262694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.262722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:25.756 [2024-10-17 16:37:39.273354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.273382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.283881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.283910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.294499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.294528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.305074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.305103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.315715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.315743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.326821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.326853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.338200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.338229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.349450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.349486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.361604] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.361637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.373243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.373279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.384626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.384658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.396346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.396392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.407944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.407976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.419741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.419773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.431092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.431121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.756 [2024-10-17 16:37:39.442883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.756 [2024-10-17 16:37:39.442914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.454267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.454295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.465596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.465628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.477344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.477391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.488957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.488989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.503038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.503084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.514510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.514541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.526488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.526520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.538205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.538234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.549899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 
[2024-10-17 16:37:39.549930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.563400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.563432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 10998.00 IOPS, 85.92 MiB/s [2024-10-17T14:37:39.707Z] [2024-10-17 16:37:39.574681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.574712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.586453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.586484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.597878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.597918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.609376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.609408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.622918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.622950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.634210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.634238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.645882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 
[2024-10-17 16:37:39.645913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.657691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.657721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.669020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.669064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.680708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.680740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.691963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.691995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.017 [2024-10-17 16:37:39.703843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.017 [2024-10-17 16:37:39.703874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.715209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.715237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.726610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.726642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.738289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.738317] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.750032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.750076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.763665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.763696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.774890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.774922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.786615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.786646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.798480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.798511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.809974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.810017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.821839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.821870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.833678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.833709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:26.278 [2024-10-17 16:37:39.845642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.845674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.857522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.857554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.869205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.869233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.880843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.880876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.891939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.891971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.903335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.903367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.916625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.916656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.927510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.927543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.939375] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.939407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.950987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.951027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.278 [2024-10-17 16:37:39.964550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.278 [2024-10-17 16:37:39.964582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538 [2024-10-17 16:37:39.975479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.538 [2024-10-17 16:37:39.975511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538 [2024-10-17 16:37:39.986989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.538 [2024-10-17 16:37:39.987029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538 [2024-10-17 16:37:39.998245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.538 [2024-10-17 16:37:39.998274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538 [2024-10-17 16:37:40.010477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.538 [2024-10-17 16:37:40.010515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538 [2024-10-17 16:37:40.022459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.538 [2024-10-17 16:37:40.022491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538 [2024-10-17 16:37:40.033969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:26.538 [2024-10-17 16:37:40.034009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.538
[... the error pair above — subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats roughly every 11 ms from 2024-10-17 16:37:40.034 through 16:37:41.924, while the elapsed-time counter advances from 00:09:26.538 to 00:09:28.364; repeated entries elided. Two I/O progress readings were interleaved with the errors: ...]
11009.00 IOPS, 86.01 MiB/s [2024-10-17T14:37:40.748Z]
10999.25 IOPS, 85.93 MiB/s [2024-10-17T14:37:41.793Z]
[2024-10-17 16:37:41.924289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.924334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:28.364 [2024-10-17 16:37:41.936347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.936378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:41.947550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.947581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:41.959099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.959127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:41.970623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.970655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:41.981870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.981901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:41.993024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:41.993068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:42.004251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:42.004289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:42.015896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:42.015927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:42.027540] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:42.027571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:42.039482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:42.039513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.364 [2024-10-17 16:37:42.051125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.364 [2024-10-17 16:37:42.051154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.062605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.062637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.074170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.074199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.085729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.085759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.097064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.097093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.108132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.108161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.121551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.121583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.132371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.132402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.143956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.624 [2024-10-17 16:37:42.143987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.624 [2024-10-17 16:37:42.155319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.155351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.166833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.166865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.178291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.178319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.190247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.190275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.201806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.201837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.213077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 
[2024-10-17 16:37:42.213105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.224019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.224074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.235863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.235895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.247467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.247498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.260817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.260849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.271633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.271664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.283158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.283186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.294510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.294541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.625 [2024-10-17 16:37:42.306014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.625 [2024-10-17 16:37:42.306060] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.317538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.317570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.329190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.329219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.341246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.341275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.352552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.352585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.363680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.363712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.375533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.375564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.387146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.387175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.400421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.400453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:28.883 [2024-10-17 16:37:42.410918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.410950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.423058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.883 [2024-10-17 16:37:42.423086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.883 [2024-10-17 16:37:42.434280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.434324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.445598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.445637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.456972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.457011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.470292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.470337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.481169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.481197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.492828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.492860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.504476] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.504506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.516305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.516333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.527798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.527829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.540887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.540918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.552072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.552100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.884 [2024-10-17 16:37:42.563618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.884 [2024-10-17 16:37:42.563649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.142 [2024-10-17 16:37:42.577065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.142 [2024-10-17 16:37:42.577094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.142 11043.60 IOPS, 86.28 MiB/s [2024-10-17T14:37:42.832Z] [2024-10-17 16:37:42.586701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.142 [2024-10-17 16:37:42.586732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.142 00:09:29.142 Latency(us) 
00:09:29.142 [2024-10-17T14:37:42.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:29.142 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:29.142 Nvme1n1 : 5.01 11049.68 86.33 0.00 0.00 11569.98 4490.43 20000.62
00:09:29.142 [2024-10-17T14:37:42.832Z] ===================================================================================================================
00:09:29.142 [2024-10-17T14:37:42.832Z] Total : 11049.68 86.33 0.00 0.00 11569.98 4490.43 20000.62
00:09:29.142 [2024-10-17 16:37:42.592995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:29.142 [2024-10-17 16:37:42.593047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[output elided: the same subsystem.c:2128 / nvmf_rpc.c:1517 error pair repeats roughly every 8 ms from 16:37:42.601026 through 16:37:42.813649]
00:09:29.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2274624) - No such process
00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2274624
00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 16:37:42
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.401 delay0 00:09:29.401 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.401 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:29.401 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.401 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.401 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.401 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:29.401 [2024-10-17 16:37:42.936107] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:35.977 Initializing NVMe Controllers 00:09:35.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:35.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:35.977 Initialization complete. Launching workers. 
00:09:35.977 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 114 00:09:35.977 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 401, failed to submit 33 00:09:35.977 success 250, unsuccessful 151, failed 0 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.977 rmmod nvme_tcp 00:09:35.977 rmmod nvme_fabrics 00:09:35.977 rmmod nvme_keyring 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2273280 ']' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2273280 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2273280 ']' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2273280 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2273280 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2273280' 00:09:35.977 killing process with pid 2273280 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2273280 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2273280 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.977 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.887 00:09:37.887 real 0m27.696s 00:09:37.887 user 0m40.894s 00:09:37.887 sys 0m8.188s 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.887 ************************************ 00:09:37.887 END TEST nvmf_zcopy 00:09:37.887 ************************************ 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.887 ************************************ 00:09:37.887 START TEST nvmf_nmic 00:09:37.887 ************************************ 00:09:37.887 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:38.146 * Looking for test storage... 
00:09:38.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.146 16:37:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.146 --rc genhtml_branch_coverage=1 00:09:38.146 --rc genhtml_function_coverage=1 00:09:38.146 --rc genhtml_legend=1 00:09:38.146 --rc geninfo_all_blocks=1 00:09:38.146 --rc geninfo_unexecuted_blocks=1 
00:09:38.146 00:09:38.146 ' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.146 --rc genhtml_branch_coverage=1 00:09:38.146 --rc genhtml_function_coverage=1 00:09:38.146 --rc genhtml_legend=1 00:09:38.146 --rc geninfo_all_blocks=1 00:09:38.146 --rc geninfo_unexecuted_blocks=1 00:09:38.146 00:09:38.146 ' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.146 --rc genhtml_branch_coverage=1 00:09:38.146 --rc genhtml_function_coverage=1 00:09:38.146 --rc genhtml_legend=1 00:09:38.146 --rc geninfo_all_blocks=1 00:09:38.146 --rc geninfo_unexecuted_blocks=1 00:09:38.146 00:09:38.146 ' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:38.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.146 --rc genhtml_branch_coverage=1 00:09:38.146 --rc genhtml_function_coverage=1 00:09:38.146 --rc genhtml_legend=1 00:09:38.146 --rc geninfo_all_blocks=1 00:09:38.146 --rc geninfo_unexecuted_blocks=1 00:09:38.146 00:09:38.146 ' 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.146 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.147 16:37:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:38.147 
16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.147 16:37:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.054 16:37:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:40.054 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:40.054 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.054 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:40.055 Found net devices under 0000:09:00.0: cvl_0_0 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:40.055 Found net devices under 0000:09:00.1: cvl_0_1 00:09:40.055 
16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:40.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:09:40.055 00:09:40.055 --- 10.0.0.2 ping statistics --- 00:09:40.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.055 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:09:40.055 00:09:40.055 --- 10.0.0.1 ping statistics --- 00:09:40.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.055 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:40.055 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2277900 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2277900 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2277900 ']' 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.314 16:37:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.314 [2024-10-17 16:37:53.815390] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:09:40.314 [2024-10-17 16:37:53.815496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.314 [2024-10-17 16:37:53.884045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.314 [2024-10-17 16:37:53.948212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.314 [2024-10-17 16:37:53.948273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:40.314 [2024-10-17 16:37:53.948300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.314 [2024-10-17 16:37:53.948314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.314 [2024-10-17 16:37:53.948325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.314 [2024-10-17 16:37:53.950028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.314 [2024-10-17 16:37:53.950074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.314 [2024-10-17 16:37:53.950166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.314 [2024-10-17 16:37:53.950170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.573 [2024-10-17 16:37:54.089853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.573 
16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.573 Malloc0 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.573 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 [2024-10-17 16:37:54.152010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:40.574 test case1: single bdev can't be used in multiple subsystems 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 [2024-10-17 16:37:54.175823] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:40.574 [2024-10-17 
16:37:54.175853] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:40.574 [2024-10-17 16:37:54.175867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 request: 00:09:40.574 { 00:09:40.574 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:40.574 "namespace": { 00:09:40.574 "bdev_name": "Malloc0", 00:09:40.574 "no_auto_visible": false 00:09:40.574 }, 00:09:40.574 "method": "nvmf_subsystem_add_ns", 00:09:40.574 "req_id": 1 00:09:40.574 } 00:09:40.574 Got JSON-RPC error response 00:09:40.574 response: 00:09:40.574 { 00:09:40.574 "code": -32602, 00:09:40.574 "message": "Invalid parameters" 00:09:40.574 } 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:40.574 Adding namespace failed - expected result. 
00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:40.574 test case2: host connect to nvmf target in multiple paths 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 [2024-10-17 16:37:54.183947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.574 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.145 16:37:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:42.084 16:37:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.084 16:37:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:42.084 16:37:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.084 16:37:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:42.084 16:37:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:44.053 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:44.053 [global] 00:09:44.053 thread=1 00:09:44.053 invalidate=1 00:09:44.053 rw=write 00:09:44.053 time_based=1 00:09:44.053 runtime=1 00:09:44.053 ioengine=libaio 00:09:44.053 direct=1 00:09:44.053 bs=4096 00:09:44.053 iodepth=1 00:09:44.053 norandommap=0 00:09:44.053 numjobs=1 00:09:44.053 00:09:44.053 verify_dump=1 00:09:44.053 verify_backlog=512 00:09:44.053 verify_state_save=0 00:09:44.053 do_verify=1 00:09:44.053 verify=crc32c-intel 00:09:44.053 [job0] 00:09:44.053 filename=/dev/nvme0n1 00:09:44.053 Could not set queue depth (nvme0n1) 00:09:44.053 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.053 fio-3.35 00:09:44.053 Starting 1 thread 00:09:45.428 00:09:45.428 job0: (groupid=0, jobs=1): err= 0: pid=2278537: Thu Oct 17 16:37:58 2024 00:09:45.428 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:09:45.428 slat (nsec): min=7588, max=33679, avg=22317.96, stdev=9508.05 00:09:45.428 clat (usec): min=198, max=41133, avg=39203.21, stdev=8502.79 00:09:45.428 lat (usec): min=207, max=41141, 
avg=39225.53, stdev=8505.77 00:09:45.428 clat percentiles (usec): 00:09:45.428 | 1.00th=[ 200], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:45.428 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:45.428 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:45.428 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:45.428 | 99.99th=[41157] 00:09:45.428 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:09:45.428 slat (usec): min=7, max=27934, avg=71.18, stdev=1233.82 00:09:45.428 clat (usec): min=124, max=650, avg=169.33, stdev=44.76 00:09:45.428 lat (usec): min=131, max=28584, avg=240.51, stdev=1255.84 00:09:45.428 clat percentiles (usec): 00:09:45.428 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:09:45.428 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 161], 00:09:45.428 | 70.00th=[ 174], 80.00th=[ 194], 90.00th=[ 223], 95.00th=[ 255], 00:09:45.428 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 652], 99.95th=[ 652], 00:09:45.428 | 99.99th=[ 652] 00:09:45.428 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.428 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.428 lat (usec) : 250=90.09%, 500=5.61%, 750=0.19% 00:09:45.428 lat (msec) : 50=4.11% 00:09:45.428 cpu : usr=0.39%, sys=0.88%, ctx=537, majf=0, minf=1 00:09:45.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.428 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.428 00:09:45.428 Run status group 0 (all jobs): 00:09:45.428 READ: bw=89.5KiB/s (91.6kB/s), 89.5KiB/s-89.5KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB 
(94.2kB), run=1028-1028msec 00:09:45.428 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:09:45.428 00:09:45.428 Disk stats (read/write): 00:09:45.428 nvme0n1: ios=45/512, merge=0/0, ticks=1723/84, in_queue=1807, util=98.60% 00:09:45.428 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.428 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.428 rmmod nvme_tcp 00:09:45.428 rmmod nvme_fabrics 00:09:45.429 rmmod nvme_keyring 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2277900 ']' 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2277900 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2277900 ']' 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2277900 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.429 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2277900 00:09:45.686 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.686 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.686 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2277900' 00:09:45.686 killing process with pid 2277900 00:09:45.687 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2277900 00:09:45.687 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2277900 00:09:45.946 16:37:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.946 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.947 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.947 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.906 00:09:47.906 real 0m9.896s 00:09:47.906 user 0m22.598s 00:09:47.906 sys 0m2.304s 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:47.906 ************************************ 00:09:47.906 END TEST nvmf_nmic 00:09:47.906 ************************************ 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.906 ************************************ 00:09:47.906 START TEST nvmf_fio_target 00:09:47.906 ************************************ 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:47.906 * Looking for test storage... 00:09:47.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:47.906 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.164 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:48.165 16:38:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:48.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.165 --rc genhtml_branch_coverage=1 00:09:48.165 --rc genhtml_function_coverage=1 00:09:48.165 --rc genhtml_legend=1 00:09:48.165 --rc geninfo_all_blocks=1 00:09:48.165 --rc geninfo_unexecuted_blocks=1 00:09:48.165 00:09:48.165 ' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:48.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.165 --rc genhtml_branch_coverage=1 00:09:48.165 --rc genhtml_function_coverage=1 00:09:48.165 --rc genhtml_legend=1 00:09:48.165 --rc geninfo_all_blocks=1 00:09:48.165 --rc geninfo_unexecuted_blocks=1 00:09:48.165 00:09:48.165 ' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:48.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.165 --rc genhtml_branch_coverage=1 00:09:48.165 --rc genhtml_function_coverage=1 00:09:48.165 --rc genhtml_legend=1 00:09:48.165 --rc geninfo_all_blocks=1 00:09:48.165 --rc geninfo_unexecuted_blocks=1 00:09:48.165 00:09:48.165 ' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:09:48.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.165 --rc genhtml_branch_coverage=1 00:09:48.165 --rc genhtml_function_coverage=1 00:09:48.165 --rc genhtml_legend=1 00:09:48.165 --rc geninfo_all_blocks=1 00:09:48.165 --rc geninfo_unexecuted_blocks=1 00:09:48.165 00:09:48.165 ' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.165 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.166 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.073 16:38:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:50.073 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:50.073 16:38:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.073 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:50.074 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:50.074 Found net devices under 0000:09:00.0: cvl_0_0 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:50.074 Found net devices under 0000:09:00.1: cvl_0_1 
00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:50.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:09:50.074 00:09:50.074 --- 10.0.0.2 ping statistics --- 00:09:50.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.074 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:09:50.074 00:09:50.074 --- 10.0.0.1 ping statistics --- 00:09:50.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.074 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
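The `nvmf_tcp_init` plumbing traced above (common.sh@250-291) can be condensed into the following dry-run sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the `10.0.0.0/24` addresses are taken from this log; the `run()` echo wrapper is purely illustrative so the sketch can be read (or printed) without root or real NICs:

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology nvmf_tcp_init builds: the target
# side of the NIC pair (cvl_0_0) is moved into a private netns so that the
# target (10.0.0.2) and initiator (10.0.0.1) talk across the physical link.
run() { echo "$@"; }   # swap for eval "$@" to actually execute (needs root)

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0            # start from a clean slate
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"     # target NIC disappears into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port (4420) for traffic arriving on the initiator NIC
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The single-packet pings in the log (`ping -c 1 10.0.0.2` from the host, `ip netns exec ... ping -c 1 10.0.0.1` from inside the namespace) then verify the topology in both directions before the target starts.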
00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2280630 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2280630 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2280630 ']' 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.074 16:38:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.332 [2024-10-17 16:38:03.814458] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:09:50.332 [2024-10-17 16:38:03.814543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.332 [2024-10-17 16:38:03.887382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.332 [2024-10-17 16:38:03.950476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.332 [2024-10-17 16:38:03.950538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.332 [2024-10-17 16:38:03.950554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.332 [2024-10-17 16:38:03.950568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.332 [2024-10-17 16:38:03.950580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:50.332 [2024-10-17 16:38:03.952227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.332 [2024-10-17 16:38:03.952285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.332 [2024-10-17 16:38:03.952369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.332 [2024-10-17 16:38:03.952386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.590 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:50.848 [2024-10-17 16:38:04.409045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.848 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.106 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:51.106 16:38:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.364 16:38:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:51.364 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.623 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:51.881 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.139 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:52.139 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:52.397 16:38:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.655 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:52.655 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.912 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:52.912 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.170 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:53.170 16:38:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:53.428 16:38:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.686 16:38:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:53.686 16:38:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.944 16:38:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:53.944 16:38:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:54.202 16:38:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.460 [2024-10-17 16:38:08.086833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.460 16:38:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:54.718 16:38:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:54.975 16:38:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
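Stripped of paths and timestamps, the `rpc.py` bring-up traced in fio.sh@17-46 above reduces to the sequence sketched below. The `rpc()` echo wrapper is illustrative (substitute the real `scripts/rpc.py` to execute), and the `Malloc0`..`Malloc6` names are the ones SPDK assigned in this run rather than arguments to the create calls:

```shell
#!/bin/sh
# Condensed sketch of the target bring-up: one TCP transport, seven 64 MiB
# malloc bdevs, a raid0 and a concat array built from five of them, and one
# subsystem exporting four namespaces on 10.0.0.2:4420.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do
    rpc bdev_malloc_create 64 512            # returns Malloc0 .. Malloc6
done
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc2 Malloc3"
rpc bdev_raid_create -n concat0 -r concat -z 64 -b "Malloc4 Malloc5 Malloc6"
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The host-side `nvme connect` in the log then attaches to `cnode1`, and the four namespaces surface as `/dev/nvme0n1`..`/dev/nvme0n4`, which is why `waitforserial SPDKISFASTANDAWESOME 4` expects exactly four matching block devices.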
00:09:55.909 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:55.909 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.909 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.909 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:55.909 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:55.909 16:38:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:57.807 16:38:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:57.807 [global] 00:09:57.807 thread=1 00:09:57.807 invalidate=1 00:09:57.807 rw=write 00:09:57.807 time_based=1 00:09:57.807 runtime=1 00:09:57.807 ioengine=libaio 00:09:57.807 direct=1 00:09:57.807 bs=4096 00:09:57.807 iodepth=1 00:09:57.807 norandommap=0 00:09:57.807 numjobs=1 00:09:57.807 00:09:57.807 
verify_dump=1 00:09:57.807 verify_backlog=512 00:09:57.807 verify_state_save=0 00:09:57.807 do_verify=1 00:09:57.807 verify=crc32c-intel 00:09:57.807 [job0] 00:09:57.807 filename=/dev/nvme0n1 00:09:57.807 [job1] 00:09:57.807 filename=/dev/nvme0n2 00:09:57.807 [job2] 00:09:57.807 filename=/dev/nvme0n3 00:09:57.807 [job3] 00:09:57.807 filename=/dev/nvme0n4 00:09:57.807 Could not set queue depth (nvme0n1) 00:09:57.807 Could not set queue depth (nvme0n2) 00:09:57.807 Could not set queue depth (nvme0n3) 00:09:57.807 Could not set queue depth (nvme0n4) 00:09:57.807 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.807 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.807 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.807 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.807 fio-3.35 00:09:57.807 Starting 4 threads 00:09:59.182 00:09:59.182 job0: (groupid=0, jobs=1): err= 0: pid=2281715: Thu Oct 17 16:38:12 2024 00:09:59.182 read: IOPS=1332, BW=5328KiB/s (5456kB/s)(5456KiB/1024msec) 00:09:59.182 slat (nsec): min=5468, max=49529, avg=10300.91, stdev=5497.41 00:09:59.182 clat (usec): min=174, max=41173, avg=492.44, stdev=3295.06 00:09:59.182 lat (usec): min=180, max=41183, avg=502.74, stdev=3295.03 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:09:59.182 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:09:59.182 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:09:59.182 | 99.00th=[ 285], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.182 | 99.99th=[41157] 00:09:59.182 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:09:59.182 slat (nsec): min=7515, max=87856, avg=17906.51, 
stdev=10020.23 00:09:59.182 clat (usec): min=133, max=693, avg=194.67, stdev=58.23 00:09:59.182 lat (usec): min=142, max=702, avg=212.57, stdev=62.06 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:59.182 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184], 00:09:59.182 | 70.00th=[ 198], 80.00th=[ 231], 90.00th=[ 273], 95.00th=[ 318], 00:09:59.182 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 693], 00:09:59.182 | 99.99th=[ 693] 00:09:59.182 bw ( KiB/s): min= 4096, max= 8192, per=44.22%, avg=6144.00, stdev=2896.31, samples=2 00:09:59.182 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:59.182 lat (usec) : 250=86.48%, 500=13.14%, 750=0.07% 00:09:59.182 lat (msec) : 50=0.31% 00:09:59.182 cpu : usr=3.23%, sys=4.99%, ctx=2902, majf=0, minf=1 00:09:59.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 issued rwts: total=1364,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.182 job1: (groupid=0, jobs=1): err= 0: pid=2281716: Thu Oct 17 16:38:12 2024 00:09:59.182 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:59.182 slat (nsec): min=5312, max=55488, avg=9936.28, stdev=5903.01 00:09:59.182 clat (usec): min=169, max=41165, avg=743.36, stdev=4557.85 00:09:59.182 lat (usec): min=174, max=41180, avg=753.29, stdev=4558.26 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:09:59.182 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 235], 00:09:59.182 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 273], 00:09:59.182 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:59.182 | 99.99th=[41157] 00:09:59.182 write: IOPS=1040, BW=4164KiB/s (4264kB/s)(4168KiB/1001msec); 0 zone resets 00:09:59.182 slat (nsec): min=6782, max=58436, avg=17288.57, stdev=7326.35 00:09:59.182 clat (usec): min=122, max=348, avg=194.46, stdev=29.08 00:09:59.182 lat (usec): min=130, max=381, avg=211.75, stdev=30.97 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 163], 20.00th=[ 172], 00:09:59.182 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 202], 00:09:59.182 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 243], 00:09:59.182 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 330], 99.95th=[ 351], 00:09:59.182 | 99.99th=[ 351] 00:09:59.182 bw ( KiB/s): min= 4096, max= 4096, per=29.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.182 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.182 lat (usec) : 250=88.53%, 500=10.60%, 750=0.24% 00:09:59.182 lat (msec) : 50=0.63% 00:09:59.182 cpu : usr=2.50%, sys=3.40%, ctx=2066, majf=0, minf=2 00:09:59.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 issued rwts: total=1024,1042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.182 job2: (groupid=0, jobs=1): err= 0: pid=2281717: Thu Oct 17 16:38:12 2024 00:09:59.182 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:09:59.182 slat (nsec): min=6385, max=35570, avg=15504.95, stdev=5280.66 00:09:59.182 clat (usec): min=262, max=42082, avg=39960.79, stdev=8871.61 00:09:59.182 lat (usec): min=280, max=42097, avg=39976.29, stdev=8871.16 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 265], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:59.182 | 30.00th=[41681], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:59.182 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:59.182 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.182 | 99.99th=[42206] 00:09:59.182 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:59.182 slat (nsec): min=6376, max=75810, avg=21014.11, stdev=12421.03 00:09:59.182 clat (usec): min=169, max=519, avg=255.66, stdev=61.68 00:09:59.182 lat (usec): min=181, max=564, avg=276.67, stdev=63.98 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:09:59.182 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 253], 00:09:59.182 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 379], 00:09:59.182 | 99.00th=[ 420], 99.50th=[ 453], 99.90th=[ 519], 99.95th=[ 519], 00:09:59.182 | 99.99th=[ 519] 00:09:59.182 bw ( KiB/s): min= 4096, max= 4096, per=29.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.182 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.182 lat (usec) : 250=56.74%, 500=39.14%, 750=0.19% 00:09:59.182 lat (msec) : 50=3.93% 00:09:59.182 cpu : usr=0.78%, sys=0.78%, ctx=535, majf=0, minf=1 00:09:59.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.182 job3: (groupid=0, jobs=1): err= 0: pid=2281718: Thu Oct 17 16:38:12 2024 00:09:59.182 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:09:59.182 slat (nsec): min=10593, max=19242, avg=14864.55, stdev=2475.92 00:09:59.182 clat (usec): min=40545, max=41035, avg=40962.82, stdev=94.67 00:09:59.182 lat (usec): 
min=40555, max=41047, avg=40977.68, stdev=95.62 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:59.182 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.182 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:59.182 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.182 | 99.99th=[41157] 00:09:59.182 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:59.182 slat (usec): min=8, max=24927, avg=65.11, stdev=1100.95 00:09:59.182 clat (usec): min=166, max=395, avg=195.05, stdev=16.44 00:09:59.182 lat (usec): min=182, max=25323, avg=260.15, stdev=1109.93 00:09:59.182 clat percentiles (usec): 00:09:59.182 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:09:59.182 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:09:59.182 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 215], 00:09:59.182 | 99.00th=[ 235], 99.50th=[ 297], 99.90th=[ 396], 99.95th=[ 396], 00:09:59.182 | 99.99th=[ 396] 00:09:59.182 bw ( KiB/s): min= 4096, max= 4096, per=29.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.182 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.182 lat (usec) : 250=95.13%, 500=0.75% 00:09:59.182 lat (msec) : 50=4.12% 00:09:59.182 cpu : usr=0.77%, sys=0.39%, ctx=536, majf=0, minf=1 00:09:59.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.182 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.182 00:09:59.182 Run status group 0 (all jobs): 00:09:59.182 READ: bw=9381KiB/s (9606kB/s), 84.9KiB/s-5328KiB/s (86.9kB/s-5456kB/s), 
io=9728KiB (9961kB), run=1001-1037msec 00:09:59.182 WRITE: bw=13.6MiB/s (14.2MB/s), 1975KiB/s-6000KiB/s (2022kB/s-6144kB/s), io=14.1MiB (14.8MB), run=1001-1037msec 00:09:59.182 00:09:59.182 Disk stats (read/write): 00:09:59.182 nvme0n1: ios=1050/1516, merge=0/0, ticks=1481/262, in_queue=1743, util=98.00% 00:09:59.182 nvme0n2: ios=827/1024, merge=0/0, ticks=589/177, in_queue=766, util=86.98% 00:09:59.182 nvme0n3: ios=41/512, merge=0/0, ticks=1664/128, in_queue=1792, util=98.54% 00:09:59.182 nvme0n4: ios=40/512, merge=0/0, ticks=1618/100, in_queue=1718, util=98.53% 00:09:59.182 16:38:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:59.182 [global] 00:09:59.182 thread=1 00:09:59.182 invalidate=1 00:09:59.182 rw=randwrite 00:09:59.182 time_based=1 00:09:59.182 runtime=1 00:09:59.182 ioengine=libaio 00:09:59.182 direct=1 00:09:59.182 bs=4096 00:09:59.182 iodepth=1 00:09:59.182 norandommap=0 00:09:59.182 numjobs=1 00:09:59.182 00:09:59.182 verify_dump=1 00:09:59.182 verify_backlog=512 00:09:59.182 verify_state_save=0 00:09:59.182 do_verify=1 00:09:59.182 verify=crc32c-intel 00:09:59.182 [job0] 00:09:59.182 filename=/dev/nvme0n1 00:09:59.182 [job1] 00:09:59.182 filename=/dev/nvme0n2 00:09:59.182 [job2] 00:09:59.182 filename=/dev/nvme0n3 00:09:59.182 [job3] 00:09:59.183 filename=/dev/nvme0n4 00:09:59.183 Could not set queue depth (nvme0n1) 00:09:59.183 Could not set queue depth (nvme0n2) 00:09:59.183 Could not set queue depth (nvme0n3) 00:09:59.183 Could not set queue depth (nvme0n4) 00:09:59.440 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.441 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.441 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:09:59.441 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.441 fio-3.35 00:09:59.441 Starting 4 threads 00:10:00.823 00:10:00.823 job0: (groupid=0, jobs=1): err= 0: pid=2281942: Thu Oct 17 16:38:14 2024 00:10:00.823 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:10:00.823 slat (nsec): min=16605, max=50019, avg=31242.48, stdev=8310.02 00:10:00.823 clat (usec): min=282, max=42001, avg=39532.18, stdev=8568.89 00:10:00.823 lat (usec): min=317, max=42036, avg=39563.43, stdev=8568.17 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:00.823 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:00.823 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:00.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:00.823 | 99.99th=[42206] 00:10:00.823 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:00.823 slat (nsec): min=6250, max=43739, avg=10516.30, stdev=5900.28 00:10:00.823 clat (usec): min=152, max=445, avg=234.42, stdev=33.16 00:10:00.823 lat (usec): min=162, max=452, avg=244.94, stdev=31.50 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 219], 00:10:00.823 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:10:00.823 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 273], 00:10:00.823 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[ 445], 99.95th=[ 445], 00:10:00.823 | 99.99th=[ 445] 00:10:00.823 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:10:00.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:00.823 lat (usec) : 250=80.00%, 500=15.89% 00:10:00.823 lat (msec) : 50=4.11% 00:10:00.823 cpu : usr=0.00%, sys=0.87%, ctx=536, majf=0, minf=1 00:10:00.823 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.823 job1: (groupid=0, jobs=1): err= 0: pid=2281947: Thu Oct 17 16:38:14 2024 00:10:00.823 read: IOPS=396, BW=1584KiB/s (1622kB/s)(1632KiB/1030msec) 00:10:00.823 slat (nsec): min=5305, max=62246, avg=10705.36, stdev=7630.00 00:10:00.823 clat (usec): min=168, max=42023, avg=2193.71, stdev=8756.45 00:10:00.823 lat (usec): min=184, max=42040, avg=2204.42, stdev=8761.11 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 200], 20.00th=[ 215], 00:10:00.823 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 262], 60.00th=[ 273], 00:10:00.823 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 355], 95.00th=[ 635], 00:10:00.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:00.823 | 99.99th=[42206] 00:10:00.823 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:00.823 slat (nsec): min=6218, max=31611, avg=9554.37, stdev=4336.63 00:10:00.823 clat (usec): min=153, max=463, avg=239.77, stdev=24.08 00:10:00.823 lat (usec): min=169, max=470, avg=249.32, stdev=23.89 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 180], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 229], 00:10:00.823 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:10:00.823 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 249], 95.00th=[ 253], 00:10:00.823 | 99.00th=[ 383], 99.50th=[ 429], 99.90th=[ 465], 99.95th=[ 465], 00:10:00.823 | 99.99th=[ 465] 00:10:00.823 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:10:00.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:10:00.823 lat (usec) : 250=71.09%, 500=25.98%, 750=0.87% 00:10:00.823 lat (msec) : 50=2.07% 00:10:00.823 cpu : usr=0.49%, sys=0.87%, ctx=921, majf=0, minf=1 00:10:00.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 issued rwts: total=408,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.823 job2: (groupid=0, jobs=1): err= 0: pid=2281948: Thu Oct 17 16:38:14 2024 00:10:00.823 read: IOPS=27, BW=112KiB/s (115kB/s)(116KiB/1037msec) 00:10:00.823 slat (nsec): min=14802, max=35551, avg=25949.34, stdev=8899.89 00:10:00.823 clat (usec): min=304, max=42041, avg=31413.60, stdev=17822.64 00:10:00.823 lat (usec): min=321, max=42076, avg=31439.55, stdev=17826.84 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 437], 00:10:00.823 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:00.823 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:00.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:00.823 | 99.99th=[42206] 00:10:00.823 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:00.823 slat (nsec): min=7828, max=48788, avg=11597.02, stdev=6254.03 00:10:00.823 clat (usec): min=157, max=470, avg=229.42, stdev=43.48 00:10:00.823 lat (usec): min=165, max=508, avg=241.02, stdev=45.96 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 192], 00:10:00.823 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:10:00.823 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 262], 00:10:00.823 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 469], 99.95th=[ 469], 00:10:00.823 
| 99.99th=[ 469] 00:10:00.823 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:10:00.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:00.823 lat (usec) : 250=84.10%, 500=11.83% 00:10:00.823 lat (msec) : 50=4.07% 00:10:00.823 cpu : usr=0.77%, sys=0.48%, ctx=541, majf=0, minf=2 00:10:00.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.823 job3: (groupid=0, jobs=1): err= 0: pid=2281949: Thu Oct 17 16:38:14 2024 00:10:00.823 read: IOPS=23, BW=92.6KiB/s (94.8kB/s)(96.0KiB/1037msec) 00:10:00.823 slat (nsec): min=9166, max=48063, avg=28376.29, stdev=9826.60 00:10:00.823 clat (usec): min=358, max=42373, avg=37814.37, stdev=11527.34 00:10:00.823 lat (usec): min=377, max=42389, avg=37842.74, stdev=11531.62 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 359], 5.00th=[ 474], 10.00th=[41157], 20.00th=[41157], 00:10:00.823 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:00.823 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:00.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:00.823 | 99.99th=[42206] 00:10:00.823 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:00.823 slat (nsec): min=6013, max=61411, avg=9773.85, stdev=5374.15 00:10:00.823 clat (usec): min=158, max=448, avg=238.85, stdev=29.26 00:10:00.823 lat (usec): min=175, max=465, avg=248.62, stdev=29.62 00:10:00.823 clat percentiles (usec): 00:10:00.823 | 1.00th=[ 184], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 225], 00:10:00.823 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 235], 
60.00th=[ 239], 00:10:00.823 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:10:00.823 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 449], 99.95th=[ 449], 00:10:00.823 | 99.99th=[ 449] 00:10:00.823 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:10:00.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:00.823 lat (usec) : 250=75.93%, 500=19.96% 00:10:00.823 lat (msec) : 50=4.10% 00:10:00.823 cpu : usr=0.29%, sys=0.48%, ctx=536, majf=0, minf=1 00:10:00.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.823 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.823 00:10:00.823 Run status group 0 (all jobs): 00:10:00.823 READ: bw=1867KiB/s (1912kB/s), 88.7KiB/s-1584KiB/s (90.8kB/s-1622kB/s), io=1936KiB (1982kB), run=1030-1037msec 00:10:00.823 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-1988KiB/s (2022kB/s-2036kB/s), io=8192KiB (8389kB), run=1030-1037msec 00:10:00.823 00:10:00.823 Disk stats (read/write): 00:10:00.823 nvme0n1: ios=70/512, merge=0/0, ticks=1584/112, in_queue=1696, util=97.90% 00:10:00.823 nvme0n2: ios=440/512, merge=0/0, ticks=1562/120, in_queue=1682, util=96.34% 00:10:00.823 nvme0n3: ios=24/512, merge=0/0, ticks=701/115, in_queue=816, util=88.94% 00:10:00.823 nvme0n4: ios=19/512, merge=0/0, ticks=704/111, in_queue=815, util=89.59% 00:10:00.823 16:38:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:00.823 [global] 00:10:00.824 thread=1 00:10:00.824 invalidate=1 00:10:00.824 rw=write 00:10:00.824 time_based=1 00:10:00.824 runtime=1 00:10:00.824 
ioengine=libaio 00:10:00.824 direct=1 00:10:00.824 bs=4096 00:10:00.824 iodepth=128 00:10:00.824 norandommap=0 00:10:00.824 numjobs=1 00:10:00.824 00:10:00.824 verify_dump=1 00:10:00.824 verify_backlog=512 00:10:00.824 verify_state_save=0 00:10:00.824 do_verify=1 00:10:00.824 verify=crc32c-intel 00:10:00.824 [job0] 00:10:00.824 filename=/dev/nvme0n1 00:10:00.824 [job1] 00:10:00.824 filename=/dev/nvme0n2 00:10:00.824 [job2] 00:10:00.824 filename=/dev/nvme0n3 00:10:00.824 [job3] 00:10:00.824 filename=/dev/nvme0n4 00:10:00.824 Could not set queue depth (nvme0n1) 00:10:00.824 Could not set queue depth (nvme0n2) 00:10:00.824 Could not set queue depth (nvme0n3) 00:10:00.824 Could not set queue depth (nvme0n4) 00:10:00.824 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.824 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.824 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.824 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.824 fio-3.35 00:10:00.824 Starting 4 threads 00:10:02.204 00:10:02.204 job0: (groupid=0, jobs=1): err= 0: pid=2282181: Thu Oct 17 16:38:15 2024 00:10:02.204 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:10:02.204 slat (usec): min=2, max=14359, avg=100.21, stdev=696.46 00:10:02.204 clat (usec): min=4690, max=46091, avg=12981.37, stdev=4100.91 00:10:02.204 lat (usec): min=5055, max=46108, avg=13081.57, stdev=4157.53 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10945], 00:10:02.204 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12125], 60.00th=[12518], 00:10:02.204 | 70.00th=[13304], 80.00th=[14353], 90.00th=[16450], 95.00th=[20055], 00:10:02.204 | 99.00th=[33817], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 
00:10:02.204 | 99.99th=[45876] 00:10:02.204 write: IOPS=4829, BW=18.9MiB/s (19.8MB/s)(19.1MiB/1010msec); 0 zone resets 00:10:02.204 slat (usec): min=3, max=12355, avg=90.68, stdev=600.55 00:10:02.204 clat (usec): min=4324, max=70956, avg=13563.23, stdev=8762.49 00:10:02.204 lat (usec): min=4334, max=70979, avg=13653.91, stdev=8828.37 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 9765], 00:10:02.204 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:10:02.204 | 70.00th=[12256], 80.00th=[13173], 90.00th=[18744], 95.00th=[33817], 00:10:02.204 | 99.00th=[56361], 99.50th=[59507], 99.90th=[68682], 99.95th=[70779], 00:10:02.204 | 99.99th=[70779] 00:10:02.204 bw ( KiB/s): min=17520, max=20480, per=29.43%, avg=19000.00, stdev=2093.04, samples=2 00:10:02.204 iops : min= 4380, max= 5120, avg=4750.00, stdev=523.26, samples=2 00:10:02.204 lat (msec) : 10=16.31%, 20=76.33%, 50=6.79%, 100=0.57% 00:10:02.204 cpu : usr=8.13%, sys=9.12%, ctx=389, majf=0, minf=1 00:10:02.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:02.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.204 issued rwts: total=4608,4878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.204 job1: (groupid=0, jobs=1): err= 0: pid=2282182: Thu Oct 17 16:38:15 2024 00:10:02.204 read: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec) 00:10:02.204 slat (usec): min=3, max=11880, avg=113.64, stdev=675.12 00:10:02.204 clat (usec): min=964, max=43755, avg=13809.43, stdev=5450.58 00:10:02.204 lat (usec): min=3463, max=43773, avg=13923.06, stdev=5499.53 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 4113], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11076], 00:10:02.204 | 30.00th=[11863], 40.00th=[12256], 
50.00th=[12649], 60.00th=[13042], 00:10:02.204 | 70.00th=[13435], 80.00th=[14746], 90.00th=[18220], 95.00th=[25297], 00:10:02.204 | 99.00th=[38011], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:10:02.204 | 99.99th=[43779] 00:10:02.204 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:02.204 slat (usec): min=3, max=19342, avg=124.19, stdev=662.95 00:10:02.204 clat (usec): min=3151, max=58387, avg=17975.31, stdev=9715.38 00:10:02.204 lat (usec): min=3159, max=58405, avg=18099.50, stdev=9772.51 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 5080], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10421], 00:10:02.204 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12780], 60.00th=[14091], 00:10:02.204 | 70.00th=[26084], 80.00th=[26608], 90.00th=[31589], 95.00th=[34866], 00:10:02.204 | 99.00th=[49546], 99.50th=[53740], 99.90th=[58459], 99.95th=[58459], 00:10:02.204 | 99.99th=[58459] 00:10:02.204 bw ( KiB/s): min=14312, max=18456, per=25.38%, avg=16384.00, stdev=2930.25, samples=2 00:10:02.204 iops : min= 3578, max= 4614, avg=4096.00, stdev=732.56, samples=2 00:10:02.204 lat (usec) : 1000=0.01% 00:10:02.204 lat (msec) : 4=0.74%, 10=7.40%, 20=70.17%, 50=21.20%, 100=0.48% 00:10:02.204 cpu : usr=5.68%, sys=8.57%, ctx=421, majf=0, minf=1 00:10:02.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:02.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.204 issued rwts: total=3888,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.204 job2: (groupid=0, jobs=1): err= 0: pid=2282184: Thu Oct 17 16:38:15 2024 00:10:02.204 read: IOPS=4388, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1001msec) 00:10:02.204 slat (usec): min=3, max=53813, avg=115.39, stdev=1052.55 00:10:02.204 clat (usec): min=579, max=79194, avg=15591.34, 
stdev=11570.60 00:10:02.204 lat (usec): min=3954, max=79204, avg=15706.73, stdev=11614.50 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 7373], 5.00th=[10028], 10.00th=[10945], 20.00th=[12125], 00:10:02.204 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[13042], 00:10:02.204 | 70.00th=[13304], 80.00th=[13566], 90.00th=[15926], 95.00th=[42730], 00:10:02.204 | 99.00th=[77071], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:10:02.204 | 99.99th=[79168] 00:10:02.204 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:02.204 slat (usec): min=4, max=15557, avg=95.11, stdev=588.38 00:10:02.204 clat (usec): min=4355, max=42044, avg=12619.55, stdev=3580.80 00:10:02.204 lat (usec): min=4363, max=42735, avg=12714.66, stdev=3615.50 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 6980], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11469], 00:10:02.204 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:10:02.204 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13698], 95.00th=[15270], 00:10:02.204 | 99.00th=[36439], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:10:02.204 | 99.99th=[42206] 00:10:02.204 bw ( KiB/s): min=16384, max=20521, per=28.58%, avg=18452.50, stdev=2925.30, samples=2 00:10:02.204 iops : min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:10:02.204 lat (usec) : 750=0.01% 00:10:02.204 lat (msec) : 4=0.06%, 10=5.84%, 20=88.66%, 50=4.02%, 100=1.41% 00:10:02.204 cpu : usr=6.30%, sys=11.50%, ctx=363, majf=0, minf=1 00:10:02.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.204 issued rwts: total=4393,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.204 job3: (groupid=0, jobs=1): 
err= 0: pid=2282185: Thu Oct 17 16:38:15 2024 00:10:02.204 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:02.204 slat (usec): min=3, max=13566, avg=171.32, stdev=978.58 00:10:02.204 clat (usec): min=3491, max=64114, avg=22453.63, stdev=12179.61 00:10:02.204 lat (usec): min=4040, max=66035, avg=22624.95, stdev=12261.19 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 7767], 5.00th=[12518], 10.00th=[13304], 20.00th=[13698], 00:10:02.204 | 30.00th=[14615], 40.00th=[16909], 50.00th=[19792], 60.00th=[20317], 00:10:02.204 | 70.00th=[21627], 80.00th=[26084], 90.00th=[45351], 95.00th=[52167], 00:10:02.204 | 99.00th=[60031], 99.50th=[61604], 99.90th=[62653], 99.95th=[63177], 00:10:02.204 | 99.99th=[64226] 00:10:02.204 write: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1004msec); 0 zone resets 00:10:02.204 slat (usec): min=3, max=26311, avg=193.87, stdev=1097.15 00:10:02.204 clat (usec): min=818, max=88880, avg=25724.65, stdev=16166.67 00:10:02.204 lat (usec): min=836, max=88890, avg=25918.52, stdev=16262.86 00:10:02.204 clat percentiles (usec): 00:10:02.204 | 1.00th=[ 3556], 5.00th=[10683], 10.00th=[13173], 20.00th=[13829], 00:10:02.204 | 30.00th=[14615], 40.00th=[17433], 50.00th=[22414], 60.00th=[26084], 00:10:02.204 | 70.00th=[26346], 80.00th=[28705], 90.00th=[55837], 95.00th=[63177], 00:10:02.204 | 99.00th=[77071], 99.50th=[84411], 99.90th=[88605], 99.95th=[88605], 00:10:02.204 | 99.99th=[88605] 00:10:02.204 bw ( KiB/s): min= 8192, max=12536, per=16.05%, avg=10364.00, stdev=3071.67, samples=2 00:10:02.204 iops : min= 2048, max= 3134, avg=2591.00, stdev=767.92, samples=2 00:10:02.204 lat (usec) : 1000=0.06% 00:10:02.204 lat (msec) : 2=0.06%, 4=0.45%, 10=1.95%, 20=46.67%, 50=40.94% 00:10:02.204 lat (msec) : 100=9.87% 00:10:02.204 cpu : usr=3.39%, sys=7.38%, ctx=291, majf=0, minf=1 00:10:02.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:02.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:02.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.204 issued rwts: total=2560,2718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.204 00:10:02.205 Run status group 0 (all jobs): 00:10:02.205 READ: bw=59.8MiB/s (62.7MB/s), 9.96MiB/s-17.8MiB/s (10.4MB/s-18.7MB/s), io=60.3MiB (63.3MB), run=1001-1010msec 00:10:02.205 WRITE: bw=63.0MiB/s (66.1MB/s), 10.6MiB/s-18.9MiB/s (11.1MB/s-19.8MB/s), io=63.7MiB (66.8MB), run=1001-1010msec 00:10:02.205 00:10:02.205 Disk stats (read/write): 00:10:02.205 nvme0n1: ios=4146/4423, merge=0/0, ticks=47941/50038, in_queue=97979, util=86.97% 00:10:02.205 nvme0n2: ios=3085/3319, merge=0/0, ticks=35106/51406, in_queue=86512, util=86.99% 00:10:02.205 nvme0n3: ios=3622/3840, merge=0/0, ticks=18409/18178, in_queue=36587, util=99.48% 00:10:02.205 nvme0n4: ios=2097/2464, merge=0/0, ticks=13358/24398, in_queue=37756, util=92.23% 00:10:02.205 16:38:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:02.205 [global] 00:10:02.205 thread=1 00:10:02.205 invalidate=1 00:10:02.205 rw=randwrite 00:10:02.205 time_based=1 00:10:02.205 runtime=1 00:10:02.205 ioengine=libaio 00:10:02.205 direct=1 00:10:02.205 bs=4096 00:10:02.205 iodepth=128 00:10:02.205 norandommap=0 00:10:02.205 numjobs=1 00:10:02.205 00:10:02.205 verify_dump=1 00:10:02.205 verify_backlog=512 00:10:02.205 verify_state_save=0 00:10:02.205 do_verify=1 00:10:02.205 verify=crc32c-intel 00:10:02.205 [job0] 00:10:02.205 filename=/dev/nvme0n1 00:10:02.205 [job1] 00:10:02.205 filename=/dev/nvme0n2 00:10:02.205 [job2] 00:10:02.205 filename=/dev/nvme0n3 00:10:02.205 [job3] 00:10:02.205 filename=/dev/nvme0n4 00:10:02.205 Could not set queue depth (nvme0n1) 00:10:02.205 Could not set queue depth (nvme0n2) 00:10:02.205 Could not set queue depth 
(nvme0n3) 00:10:02.205 Could not set queue depth (nvme0n4) 00:10:02.463 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.463 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.463 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.463 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.463 fio-3.35 00:10:02.463 Starting 4 threads 00:10:03.840 00:10:03.841 job0: (groupid=0, jobs=1): err= 0: pid=2282531: Thu Oct 17 16:38:17 2024 00:10:03.841 read: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1002msec) 00:10:03.841 slat (usec): min=2, max=17455, avg=130.81, stdev=846.37 00:10:03.841 clat (usec): min=573, max=44085, avg=15830.22, stdev=5308.50 00:10:03.841 lat (usec): min=3686, max=44094, avg=15961.03, stdev=5374.17 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 4113], 5.00th=[10814], 10.00th=[11469], 20.00th=[12387], 00:10:03.841 | 30.00th=[13960], 40.00th=[14746], 50.00th=[15401], 60.00th=[15664], 00:10:03.841 | 70.00th=[16057], 80.00th=[16450], 90.00th=[19792], 95.00th=[25822], 00:10:03.841 | 99.00th=[36439], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:10:03.841 | 99.99th=[44303] 00:10:03.841 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:03.841 slat (usec): min=4, max=30540, avg=189.29, stdev=1296.48 00:10:03.841 clat (usec): min=6029, max=79641, avg=25721.21, stdev=14894.82 00:10:03.841 lat (usec): min=6048, max=79693, avg=25910.50, stdev=15011.66 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 7177], 5.00th=[11338], 10.00th=[11469], 20.00th=[12387], 00:10:03.841 | 30.00th=[13960], 40.00th=[17171], 50.00th=[22152], 60.00th=[25822], 00:10:03.841 | 70.00th=[32637], 80.00th=[36439], 90.00th=[48497], 95.00th=[57934], 00:10:03.841 
| 99.00th=[67634], 99.50th=[70779], 99.90th=[71828], 99.95th=[79168], 00:10:03.841 | 99.99th=[79168] 00:10:03.841 bw ( KiB/s): min= 8760, max=15816, per=19.57%, avg=12288.00, stdev=4989.35, samples=2 00:10:03.841 iops : min= 2190, max= 3954, avg=3072.00, stdev=1247.34, samples=2 00:10:03.841 lat (usec) : 750=0.02% 00:10:03.841 lat (msec) : 4=0.30%, 10=2.06%, 20=65.16%, 50=27.79%, 100=4.68% 00:10:03.841 cpu : usr=4.60%, sys=6.19%, ctx=269, majf=0, minf=1 00:10:03.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:03.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.841 issued rwts: total=2959,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.841 job1: (groupid=0, jobs=1): err= 0: pid=2282533: Thu Oct 17 16:38:17 2024 00:10:03.841 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:10:03.841 slat (usec): min=2, max=17953, avg=143.41, stdev=925.79 00:10:03.841 clat (usec): min=8772, max=51003, avg=18430.68, stdev=9179.57 00:10:03.841 lat (usec): min=8791, max=51013, avg=18574.09, stdev=9244.31 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 9765], 5.00th=[11338], 10.00th=[11731], 20.00th=[11994], 00:10:03.841 | 30.00th=[12125], 40.00th=[13042], 50.00th=[14091], 60.00th=[15270], 00:10:03.841 | 70.00th=[20317], 80.00th=[24511], 90.00th=[31327], 95.00th=[41157], 00:10:03.841 | 99.00th=[49546], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:10:03.841 | 99.99th=[51119] 00:10:03.841 write: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1007msec); 0 zone resets 00:10:03.841 slat (usec): min=3, max=30765, avg=148.06, stdev=985.26 00:10:03.841 clat (usec): min=2344, max=81488, avg=19835.27, stdev=12483.31 00:10:03.841 lat (usec): min=5146, max=81529, avg=19983.33, stdev=12556.82 00:10:03.841 clat percentiles (usec): 00:10:03.841 
| 1.00th=[ 6063], 5.00th=[ 8455], 10.00th=[10814], 20.00th=[11731], 00:10:03.841 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12911], 60.00th=[15795], 00:10:03.841 | 70.00th=[22676], 80.00th=[30278], 90.00th=[37487], 95.00th=[49546], 00:10:03.841 | 99.00th=[61604], 99.50th=[61604], 99.90th=[62129], 99.95th=[66323], 00:10:03.841 | 99.99th=[81265] 00:10:03.841 bw ( KiB/s): min=12288, max=15232, per=21.91%, avg=13760.00, stdev=2081.72, samples=2 00:10:03.841 iops : min= 3072, max= 3808, avg=3440.00, stdev=520.43, samples=2 00:10:03.841 lat (msec) : 4=0.02%, 10=6.14%, 20=61.05%, 50=30.06%, 100=2.73% 00:10:03.841 cpu : usr=2.98%, sys=7.95%, ctx=333, majf=0, minf=1 00:10:03.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:03.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.841 issued rwts: total=3072,3568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.841 job2: (groupid=0, jobs=1): err= 0: pid=2282534: Thu Oct 17 16:38:17 2024 00:10:03.841 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:03.841 slat (usec): min=2, max=14621, avg=123.87, stdev=859.75 00:10:03.841 clat (usec): min=4895, max=33107, avg=16416.15, stdev=4108.22 00:10:03.841 lat (usec): min=4898, max=33122, avg=16540.01, stdev=4173.01 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 4948], 5.00th=[ 9503], 10.00th=[12125], 20.00th=[14091], 00:10:03.841 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[17171], 00:10:03.841 | 70.00th=[18744], 80.00th=[19006], 90.00th=[21103], 95.00th=[24249], 00:10:03.841 | 99.00th=[26870], 99.50th=[29754], 99.90th=[29754], 99.95th=[30540], 00:10:03.841 | 99.99th=[33162] 00:10:03.841 write: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1006msec); 0 zone resets 00:10:03.841 slat (usec): min=3, max=21316, avg=134.06, 
stdev=872.83 00:10:03.841 clat (usec): min=1598, max=56556, avg=17616.40, stdev=8895.66 00:10:03.841 lat (usec): min=5576, max=56571, avg=17750.46, stdev=8964.10 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 6652], 5.00th=[ 8455], 10.00th=[11469], 20.00th=[12518], 00:10:03.841 | 30.00th=[13698], 40.00th=[14484], 50.00th=[15139], 60.00th=[15795], 00:10:03.841 | 70.00th=[16712], 80.00th=[18744], 90.00th=[30016], 95.00th=[40109], 00:10:03.841 | 99.00th=[51643], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:10:03.841 | 99.99th=[56361] 00:10:03.841 bw ( KiB/s): min=13792, max=16384, per=24.02%, avg=15088.00, stdev=1832.82, samples=2 00:10:03.841 iops : min= 3448, max= 4096, avg=3772.00, stdev=458.21, samples=2 00:10:03.841 lat (msec) : 2=0.01%, 10=5.87%, 20=78.70%, 50=14.75%, 100=0.67% 00:10:03.841 cpu : usr=3.18%, sys=6.27%, ctx=344, majf=0, minf=1 00:10:03.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:03.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.841 issued rwts: total=3584,3900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.841 job3: (groupid=0, jobs=1): err= 0: pid=2282535: Thu Oct 17 16:38:17 2024 00:10:03.841 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:10:03.841 slat (usec): min=3, max=14594, avg=101.13, stdev=697.57 00:10:03.841 clat (usec): min=4270, max=30950, avg=12961.25, stdev=3717.83 00:10:03.841 lat (usec): min=4278, max=31077, avg=13062.38, stdev=3762.49 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 5276], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10814], 00:10:03.841 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:10:03.841 | 70.00th=[13304], 80.00th=[13960], 90.00th=[17695], 95.00th=[20317], 00:10:03.841 | 99.00th=[28181], 99.50th=[29754], 
99.90th=[30278], 99.95th=[31065], 00:10:03.841 | 99.99th=[31065] 00:10:03.841 write: IOPS=5275, BW=20.6MiB/s (21.6MB/s)(20.8MiB/1011msec); 0 zone resets 00:10:03.841 slat (usec): min=4, max=10069, avg=77.77, stdev=511.99 00:10:03.841 clat (usec): min=1310, max=42844, avg=11581.77, stdev=3701.39 00:10:03.841 lat (usec): min=1321, max=42858, avg=11659.54, stdev=3733.53 00:10:03.841 clat percentiles (usec): 00:10:03.841 | 1.00th=[ 4178], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[ 9503], 00:10:03.841 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:10:03.841 | 70.00th=[12649], 80.00th=[12911], 90.00th=[14877], 95.00th=[16450], 00:10:03.841 | 99.00th=[23462], 99.50th=[29492], 99.90th=[42730], 99.95th=[42730], 00:10:03.841 | 99.99th=[42730] 00:10:03.841 bw ( KiB/s): min=18312, max=23344, per=33.16%, avg=20828.00, stdev=3558.16, samples=2 00:10:03.841 iops : min= 4578, max= 5836, avg=5207.00, stdev=889.54, samples=2 00:10:03.841 lat (msec) : 2=0.13%, 4=0.32%, 10=15.85%, 20=80.12%, 50=3.58% 00:10:03.841 cpu : usr=7.52%, sys=11.98%, ctx=431, majf=0, minf=1 00:10:03.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:03.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.841 issued rwts: total=5120,5334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.841 00:10:03.841 Run status group 0 (all jobs): 00:10:03.841 READ: bw=56.9MiB/s (59.7MB/s), 11.5MiB/s-19.8MiB/s (12.1MB/s-20.7MB/s), io=57.6MiB (60.4MB), run=1002-1011msec 00:10:03.841 WRITE: bw=61.3MiB/s (64.3MB/s), 12.0MiB/s-20.6MiB/s (12.6MB/s-21.6MB/s), io=62.0MiB (65.0MB), run=1002-1011msec 00:10:03.841 00:10:03.841 Disk stats (read/write): 00:10:03.841 nvme0n1: ios=2412/2560, merge=0/0, ticks=19656/32385, in_queue=52041, util=98.70% 00:10:03.841 nvme0n2: ios=2919/3072, merge=0/0, 
ticks=18150/24690, in_queue=42840, util=88.12% 00:10:03.841 nvme0n3: ios=3051/3079, merge=0/0, ticks=29451/27399, in_queue=56850, util=97.92% 00:10:03.841 nvme0n4: ios=4277/4608, merge=0/0, ticks=50457/49411, in_queue=99868, util=98.64% 00:10:03.841 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:03.841 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2282673 00:10:03.841 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:03.841 16:38:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:03.841 [global] 00:10:03.841 thread=1 00:10:03.841 invalidate=1 00:10:03.841 rw=read 00:10:03.841 time_based=1 00:10:03.841 runtime=10 00:10:03.841 ioengine=libaio 00:10:03.841 direct=1 00:10:03.841 bs=4096 00:10:03.841 iodepth=1 00:10:03.841 norandommap=1 00:10:03.841 numjobs=1 00:10:03.841 00:10:03.841 [job0] 00:10:03.841 filename=/dev/nvme0n1 00:10:03.841 [job1] 00:10:03.841 filename=/dev/nvme0n2 00:10:03.841 [job2] 00:10:03.841 filename=/dev/nvme0n3 00:10:03.841 [job3] 00:10:03.841 filename=/dev/nvme0n4 00:10:03.841 Could not set queue depth (nvme0n1) 00:10:03.841 Could not set queue depth (nvme0n2) 00:10:03.841 Could not set queue depth (nvme0n3) 00:10:03.841 Could not set queue depth (nvme0n4) 00:10:03.841 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.841 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.841 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.841 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.841 fio-3.35 00:10:03.841 Starting 4 threads 00:10:07.125 16:38:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:07.125 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:07.125 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=5017600, buflen=4096 00:10:07.125 fio: pid=2282770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.125 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.125 16:38:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:07.125 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9216000, buflen=4096 00:10:07.125 fio: pid=2282769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.384 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.384 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:07.384 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=20828160, buflen=4096 00:10:07.384 fio: pid=2282761, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.643 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.643 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=372736, buflen=4096 00:10:07.643 fio: pid=2282762, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:07.643 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:07.900 00:10:07.900 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2282761: Thu Oct 17 16:38:21 2024 00:10:07.900 read: IOPS=1435, BW=5743KiB/s (5880kB/s)(19.9MiB/3542msec) 00:10:07.900 slat (usec): min=5, max=31579, avg=23.27, stdev=468.54 00:10:07.900 clat (usec): min=175, max=42146, avg=665.19, stdev=4031.05 00:10:07.900 lat (usec): min=181, max=42178, avg=686.94, stdev=4056.43 00:10:07.900 clat percentiles (usec): 00:10:07.900 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:10:07.900 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 255], 60.00th=[ 277], 00:10:07.900 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 396], 00:10:07.900 | 99.00th=[ 1958], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:07.900 | 99.99th=[42206] 00:10:07.900 bw ( KiB/s): min= 184, max=13960, per=56.66%, avg=5166.67, stdev=6084.37, samples=6 00:10:07.900 iops : min= 46, max= 3490, avg=1291.67, stdev=1521.09, samples=6 00:10:07.900 lat (usec) : 250=48.43%, 500=49.55%, 750=0.85%, 1000=0.06% 00:10:07.900 lat (msec) : 2=0.12%, 4=0.02%, 50=0.96% 00:10:07.900 cpu : usr=0.93%, sys=2.97%, ctx=5092, majf=0, minf=2 00:10:07.900 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.900 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.900 issued rwts: total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.900 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.900 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2282762: Thu Oct 17 16:38:21 2024 00:10:07.900 read: IOPS=24, 
BW=95.9KiB/s (98.2kB/s)(364KiB/3795msec) 00:10:07.900 slat (usec): min=10, max=16973, avg=209.12, stdev=1767.04 00:10:07.900 clat (usec): min=505, max=42049, avg=41129.51, stdev=4333.32 00:10:07.900 lat (usec): min=527, max=57982, avg=41340.79, stdev=4678.63 00:10:07.900 clat percentiles (usec): 00:10:07.900 | 1.00th=[ 506], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:07.900 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:07.900 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:07.900 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:07.900 | 99.99th=[42206] 00:10:07.900 bw ( KiB/s): min= 93, max= 104, per=1.05%, avg=96.71, stdev= 3.40, samples=7 00:10:07.900 iops : min= 23, max= 26, avg=24.14, stdev= 0.90, samples=7 00:10:07.900 lat (usec) : 750=1.09% 00:10:07.900 lat (msec) : 50=97.83% 00:10:07.900 cpu : usr=0.13%, sys=0.00%, ctx=95, majf=0, minf=1 00:10:07.900 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.900 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.900 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.900 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.901 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2282769: Thu Oct 17 16:38:21 2024 00:10:07.901 read: IOPS=690, BW=2762KiB/s (2828kB/s)(9000KiB/3259msec) 00:10:07.901 slat (usec): min=4, max=14906, avg=30.52, stdev=313.86 00:10:07.901 clat (usec): min=187, max=42194, avg=1407.75, stdev=6617.10 00:10:07.901 lat (usec): min=204, max=55979, avg=1438.28, stdev=6662.57 00:10:07.901 clat percentiles (usec): 00:10:07.901 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 233], 20.00th=[ 265], 00:10:07.901 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 318], 60.00th=[ 330], 00:10:07.901 
| 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 400], 95.00th=[ 482], 00:10:07.901 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:07.901 | 99.99th=[42206] 00:10:07.901 bw ( KiB/s): min= 216, max=11592, per=32.81%, avg=2992.00, stdev=4363.05, samples=6 00:10:07.901 iops : min= 54, max= 2898, avg=748.00, stdev=1090.76, samples=6 00:10:07.901 lat (usec) : 250=14.30%, 500=81.16%, 750=1.82% 00:10:07.901 lat (msec) : 50=2.67% 00:10:07.901 cpu : usr=0.77%, sys=1.84%, ctx=2252, majf=0, minf=2 00:10:07.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.901 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.901 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.901 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2282770: Thu Oct 17 16:38:21 2024 00:10:07.901 read: IOPS=416, BW=1664KiB/s (1704kB/s)(4900KiB/2945msec) 00:10:07.901 slat (nsec): min=5310, max=59719, avg=10865.14, stdev=8871.45 00:10:07.901 clat (usec): min=175, max=41064, avg=2371.02, stdev=9070.12 00:10:07.901 lat (usec): min=182, max=41073, avg=2381.88, stdev=9073.99 00:10:07.901 clat percentiles (usec): 00:10:07.901 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:10:07.901 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:10:07.901 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 359], 95.00th=[41157], 00:10:07.901 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:07.901 | 99.99th=[41157] 00:10:07.901 bw ( KiB/s): min= 96, max= 128, per=1.12%, avg=102.40, stdev=14.31, samples=5 00:10:07.901 iops : min= 24, max= 32, avg=25.60, stdev= 3.58, samples=5 00:10:07.901 lat (usec) : 250=67.37%, 500=26.92%, 750=0.16% 00:10:07.901 lat (msec) : 
2=0.16%, 10=0.08%, 50=5.22% 00:10:07.901 cpu : usr=0.14%, sys=0.58%, ctx=1228, majf=0, minf=1 00:10:07.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.901 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.901 issued rwts: total=1226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.901 00:10:07.901 Run status group 0 (all jobs): 00:10:07.901 READ: bw=9118KiB/s (9337kB/s), 95.9KiB/s-5743KiB/s (98.2kB/s-5880kB/s), io=33.8MiB (35.4MB), run=2945-3795msec 00:10:07.901 00:10:07.901 Disk stats (read/write): 00:10:07.901 nvme0n1: ios=5120/0, merge=0/0, ticks=4094/0, in_queue=4094, util=98.51% 00:10:07.901 nvme0n2: ios=129/0, merge=0/0, ticks=4069/0, in_queue=4069, util=99.36% 00:10:07.901 nvme0n3: ios=2246/0, merge=0/0, ticks=2963/0, in_queue=2963, util=96.35% 00:10:07.901 nvme0n4: ios=1273/0, merge=0/0, ticks=3825/0, in_queue=3825, util=99.73% 00:10:08.158 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.158 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:08.417 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.417 16:38:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:08.675 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.675 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:08.933 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.933 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2282673 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:09.192 16:38:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:09.192 nvmf hotplug test: fio failed as expected 00:10:09.192 16:38:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:09.450 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.708 rmmod nvme_tcp 00:10:09.708 rmmod nvme_fabrics 00:10:09.708 rmmod nvme_keyring 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2280630 ']' 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2280630 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2280630 ']' 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2280630 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2280630 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2280630' 00:10:09.708 killing process with pid 2280630 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2280630 00:10:09.708 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2280630 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@789 -- # iptables-save 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.968 16:38:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.896 00:10:11.896 real 0m24.010s 00:10:11.896 user 1m25.228s 00:10:11.896 sys 0m6.446s 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.896 ************************************ 00:10:11.896 END TEST nvmf_fio_target 00:10:11.896 ************************************ 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.896 
************************************ 00:10:11.896 START TEST nvmf_bdevio 00:10:11.896 ************************************ 00:10:11.896 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:12.155 * Looking for test storage... 00:10:12.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.155 16:38:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:12.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.155 --rc genhtml_branch_coverage=1 00:10:12.155 --rc genhtml_function_coverage=1 00:10:12.155 --rc genhtml_legend=1 00:10:12.155 --rc geninfo_all_blocks=1 00:10:12.155 --rc geninfo_unexecuted_blocks=1 00:10:12.155 00:10:12.155 ' 00:10:12.155 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:12.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.156 --rc genhtml_branch_coverage=1 00:10:12.156 --rc genhtml_function_coverage=1 00:10:12.156 --rc genhtml_legend=1 00:10:12.156 --rc geninfo_all_blocks=1 00:10:12.156 --rc geninfo_unexecuted_blocks=1 00:10:12.156 00:10:12.156 ' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:12.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.156 --rc genhtml_branch_coverage=1 00:10:12.156 --rc genhtml_function_coverage=1 00:10:12.156 --rc genhtml_legend=1 00:10:12.156 --rc geninfo_all_blocks=1 00:10:12.156 --rc geninfo_unexecuted_blocks=1 00:10:12.156 00:10:12.156 ' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:12.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.156 --rc genhtml_branch_coverage=1 00:10:12.156 --rc genhtml_function_coverage=1 00:10:12.156 --rc genhtml_legend=1 00:10:12.156 --rc geninfo_all_blocks=1 00:10:12.156 --rc geninfo_unexecuted_blocks=1 00:10:12.156 00:10:12.156 ' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.156 16:38:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:14.691 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.692 16:38:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.692 16:38:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:14.692 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:14.692 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.692 
16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:14.692 Found net devices under 0000:09:00.0: cvl_0_0 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:14.692 Found net devices under 0000:09:00.1: cvl_0_1 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.692 16:38:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:10:14.692 00:10:14.692 --- 10.0.0.2 ping statistics --- 00:10:14.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.692 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:14.692 00:10:14.692 --- 10.0.0.1 ping statistics --- 00:10:14.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.692 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:14.692 16:38:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2285401 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2285401 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2285401 ']' 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.692 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.693 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.693 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.693 [2024-10-17 16:38:28.119266] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:10:14.693 [2024-10-17 16:38:28.119363] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.693 [2024-10-17 16:38:28.194798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.693 [2024-10-17 16:38:28.261340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.693 [2024-10-17 16:38:28.261395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.693 [2024-10-17 16:38:28.261422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.693 [2024-10-17 16:38:28.261435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.693 [2024-10-17 16:38:28.261446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:14.693 [2024-10-17 16:38:28.263181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:14.693 [2024-10-17 16:38:28.263237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:14.693 [2024-10-17 16:38:28.263268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:14.693 [2024-10-17 16:38:28.263272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 [2024-10-17 16:38:28.405027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.951 16:38:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 Malloc0 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.951 [2024-10-17 16:38:28.477444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:14.951 { 00:10:14.951 "params": { 00:10:14.951 "name": "Nvme$subsystem", 00:10:14.951 "trtype": "$TEST_TRANSPORT", 00:10:14.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.951 "adrfam": "ipv4", 00:10:14.951 "trsvcid": "$NVMF_PORT", 00:10:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.951 "hdgst": ${hdgst:-false}, 00:10:14.951 "ddgst": ${ddgst:-false} 00:10:14.951 }, 00:10:14.951 "method": "bdev_nvme_attach_controller" 00:10:14.951 } 00:10:14.951 EOF 00:10:14.951 )") 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:14.951 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:10:14.952 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:14.952 16:38:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:14.952 "params": { 00:10:14.952 "name": "Nvme1", 00:10:14.952 "trtype": "tcp", 00:10:14.952 "traddr": "10.0.0.2", 00:10:14.952 "adrfam": "ipv4", 00:10:14.952 "trsvcid": "4420", 00:10:14.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.952 "hdgst": false, 00:10:14.952 "ddgst": false 00:10:14.952 }, 00:10:14.952 "method": "bdev_nvme_attach_controller" 00:10:14.952 }' 00:10:14.952 [2024-10-17 16:38:28.528460] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:10:14.952 [2024-10-17 16:38:28.528552] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285548 ] 00:10:14.952 [2024-10-17 16:38:28.589238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.210 [2024-10-17 16:38:28.654630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.210 [2024-10-17 16:38:28.654679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.210 [2024-10-17 16:38:28.654682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.210 I/O targets: 00:10:15.210 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:15.210 00:10:15.210 00:10:15.210 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.210 http://cunit.sourceforge.net/ 00:10:15.210 00:10:15.210 00:10:15.210 Suite: bdevio tests on: Nvme1n1 00:10:15.210 Test: blockdev write read block ...passed 00:10:15.469 Test: blockdev write zeroes read block ...passed 00:10:15.469 Test: blockdev write zeroes read no split ...passed 00:10:15.469 Test: blockdev write zeroes read split 
...passed 00:10:15.469 Test: blockdev write zeroes read split partial ...passed 00:10:15.469 Test: blockdev reset ...[2024-10-17 16:38:29.036901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:15.469 [2024-10-17 16:38:29.037017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e2700 (9): Bad file descriptor 00:10:15.469 [2024-10-17 16:38:29.139397] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:15.469 passed 00:10:15.727 Test: blockdev write read 8 blocks ...passed 00:10:15.727 Test: blockdev write read size > 128k ...passed 00:10:15.727 Test: blockdev write read invalid size ...passed 00:10:15.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.727 Test: blockdev write read max offset ...passed 00:10:15.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.727 Test: blockdev writev readv 8 blocks ...passed 00:10:15.727 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.727 Test: blockdev writev readv block ...passed 00:10:15.727 Test: blockdev writev readv size > 128k ...passed 00:10:15.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.727 Test: blockdev comparev and writev ...[2024-10-17 16:38:29.390184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.390220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.390244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.390262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.390584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.390609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.390632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.390649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.390951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.390976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.391008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.391026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.391335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:15.727 [2024-10-17 16:38:29.391360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:15.727 [2024-10-17 16:38:29.391380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:10:15.727 [2024-10-17 16:38:29.391397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:15.985 passed 00:10:15.985 Test: blockdev nvme passthru rw ...passed 00:10:15.985 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:38:29.473249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.985 [2024-10-17 16:38:29.473277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:15.985 [2024-10-17 16:38:29.473425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.985 [2024-10-17 16:38:29.473447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:15.985 [2024-10-17 16:38:29.473585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.985 [2024-10-17 16:38:29.473614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:15.985 [2024-10-17 16:38:29.473756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:15.985 [2024-10-17 16:38:29.473780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:15.985 passed 00:10:15.985 Test: blockdev nvme admin passthru ...passed 00:10:15.985 Test: blockdev copy ...passed 00:10:15.985 00:10:15.985 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.985 suites 1 1 n/a 0 0 00:10:15.985 tests 23 23 23 0 0 00:10:15.985 asserts 152 152 152 0 n/a 00:10:15.985 00:10:15.985 Elapsed time = 1.377 seconds 00:10:16.243 16:38:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.243 rmmod nvme_tcp 00:10:16.243 rmmod nvme_fabrics 00:10:16.243 rmmod nvme_keyring 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2285401 ']' 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2285401 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2285401 ']' 
00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2285401 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2285401 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2285401' 00:10:16.243 killing process with pid 2285401 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2285401 00:10:16.243 16:38:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2285401 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.502 
16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.502 16:38:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.468 16:38:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.468 00:10:18.468 real 0m6.567s 00:10:18.468 user 0m10.289s 00:10:18.468 sys 0m2.245s 00:10:18.468 16:38:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.468 16:38:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.468 ************************************ 00:10:18.468 END TEST nvmf_bdevio 00:10:18.468 ************************************ 00:10:18.468 16:38:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:18.468 00:10:18.468 real 3m55.096s 00:10:18.468 user 10m17.401s 00:10:18.468 sys 1m5.812s 00:10:18.468 16:38:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.468 16:38:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.468 ************************************ 00:10:18.468 END TEST nvmf_target_core 00:10:18.468 ************************************ 00:10:18.727 16:38:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:18.727 16:38:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.727 16:38:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.727 16:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.727 
************************************ 00:10:18.727 START TEST nvmf_target_extra 00:10:18.727 ************************************ 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:18.727 * Looking for test storage... 00:10:18.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:18.727 
16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.727 --rc genhtml_branch_coverage=1 00:10:18.727 --rc genhtml_function_coverage=1 00:10:18.727 --rc genhtml_legend=1 00:10:18.727 --rc geninfo_all_blocks=1 00:10:18.727 
--rc geninfo_unexecuted_blocks=1 00:10:18.727 00:10:18.727 ' 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.727 --rc genhtml_branch_coverage=1 00:10:18.727 --rc genhtml_function_coverage=1 00:10:18.727 --rc genhtml_legend=1 00:10:18.727 --rc geninfo_all_blocks=1 00:10:18.727 --rc geninfo_unexecuted_blocks=1 00:10:18.727 00:10:18.727 ' 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.727 --rc genhtml_branch_coverage=1 00:10:18.727 --rc genhtml_function_coverage=1 00:10:18.727 --rc genhtml_legend=1 00:10:18.727 --rc geninfo_all_blocks=1 00:10:18.727 --rc geninfo_unexecuted_blocks=1 00:10:18.727 00:10:18.727 ' 00:10:18.727 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.728 --rc genhtml_branch_coverage=1 00:10:18.728 --rc genhtml_function_coverage=1 00:10:18.728 --rc genhtml_legend=1 00:10:18.728 --rc geninfo_all_blocks=1 00:10:18.728 --rc geninfo_unexecuted_blocks=1 00:10:18.728 00:10:18.728 ' 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:18.728 ************************************ 00:10:18.728 START TEST nvmf_example 00:10:18.728 ************************************ 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:18.728 * Looking for test storage... 00:10:18.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:18.728 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.987 
16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc 
genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:18.987 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:18.988 16:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.988 
16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.988 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.521 16:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.521 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:21.521 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:21.522 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:21.522 Found net devices under 0000:09:00.0: cvl_0_0 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:21.522 16:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:21.522 Found net devices under 0000:09:00.1: cvl_0_1 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.522 
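[Editor's note] The discovery loop traced above resolves each e810 PCI function (0000:09:00.0, 0000:09:00.1) to its kernel net device by globbing sysfs and stripping the directory prefix, exactly as `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `${pci_net_devs[@]##*/}` does. A minimal sketch of that lookup; `pci_net_devs` is a hypothetical helper name (the real logic lives inline in nvmf/common.sh), and it takes a sysfs root argument so it can be exercised without the actual NICs:

```shell
# Sketch of the sysfs lookup shown in the trace: for a PCI address like
# 0000:09:00.0, the kernel exposes its network interfaces under
# <sysfs>/devices/<pci>/net/<ifname>.
pci_net_devs() {
	local sysfs_root=$1 pci=$2
	# Glob all interface directories for this PCI function. (If none exist,
	# the unexpanded pattern is returned; the real script checks the count.)
	local devs=("$sysfs_root/devices/$pci/net/"*)
	# Keep only the interface names, mirroring ${pci_net_devs[@]##*/}
	printf '%s\n' "${devs[@]##*/}"
}
```

On the test node this yields cvl_0_0 and cvl_0_1, the two ice-driver ports the rest of the run uses.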
16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:10:21.522 00:10:21.522 --- 10.0.0.2 ping statistics --- 00:10:21.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.522 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:10:21.522 00:10:21.522 --- 10.0.0.1 ping statistics --- 00:10:21.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.522 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:21.522 16:38:34 
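[Editor's note] The `nvmf_tcp_init` phase traced above splits the two ports into a target side and an initiator side using a network namespace, then opens TCP port 4420 and verifies connectivity in both directions. Condensed from the trace (interface names, addresses, and flags are taken verbatim from the log; all of this requires root and the physical NICs, so it is shown as a command sequence, not something runnable standalone):

```shell
# Rough equivalent of the nvmf_tcp_init steps in the trace (root required).
# cvl_0_0 becomes the target NIC inside its own namespace; cvl_0_1 stays in
# the default namespace as the initiator side.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # move target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify both directions, as the ping output in the log shows.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target port in its own namespace is what lets a single host exercise a real NIC-to-NIC TCP path instead of loopback.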
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2287699 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2287699 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2287699 ']' 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:21.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.522 16:38:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.457 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:22.457 
16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:22.457 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:34.656 Initializing NVMe Controllers 00:10:34.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:34.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:34.656 Initialization complete. Launching workers. 00:10:34.656 ======================================================== 00:10:34.656 Latency(us) 00:10:34.656 Device Information : IOPS MiB/s Average min max 00:10:34.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14314.07 55.91 4471.91 884.63 20106.89 00:10:34.656 ======================================================== 00:10:34.656 Total : 14314.07 55.91 4471.91 884.63 20106.89 00:10:34.656 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.656 rmmod nvme_tcp 00:10:34.656 rmmod nvme_fabrics 00:10:34.656 rmmod nvme_keyring 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
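[Editor's note] The RPC sequence traced above (nvmf_create_transport through nvmf_subsystem_add_listener) is the standard bring-up of an SPDK NVMe-oF TCP target, followed by a perf run from the initiator side. Condensed into plain commands (app path, flags, NQN, and addresses are all taken from the trace; `rpc_cmd` in the test wraps `scripts/rpc.py`, used here as an assumption; paths are relative to the SPDK repo root):

```shell
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Start the nvmf example app inside the target namespace, as the trace does.
ip netns exec cvl_0_0_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF &

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as in the trace
$RPC bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0           # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Exercise it from the initiator namespace: 64-deep queue, 4 KiB I/O,
# 30% reads / 70% writes... no, -M 30 means 30% reads in a randrw mix? See
# spdk_nvme_perf -h on the build in use; flags below are verbatim from the log.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
```

The results table in the log reports ~14.3k IOPS at ~4.5 ms average latency over the 10-second run.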
00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2287699 ']' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2287699 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2287699 ']' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2287699 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2287699 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2287699' 00:10:34.656 killing process with pid 2287699 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2287699 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2287699 00:10:34.656 nvmf threads initialize successfully 00:10:34.656 bdev subsystem init successfully 00:10:34.656 created a nvmf target service 00:10:34.656 create targets's poll groups done 00:10:34.656 all subsystems of target started 00:10:34.656 nvmf target is running 00:10:34.656 all subsystems of target stopped 00:10:34.656 destroy targets's poll groups done 00:10:34.656 destroyed the nvmf target service 00:10:34.656 bdev subsystem 
finish successfully 00:10:34.656 nvmf threads destroy successfully 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.656 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.915 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.915 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:34.915 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:34.915 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 00:10:34.915 real 0m16.230s 00:10:34.915 user 0m45.537s 00:10:34.915 sys 0m3.353s 00:10:34.915 
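[Editor's note] Teardown in the trace (`iptr`, `remove_spdk_ns`) removes only the firewall rules the test added, by round-tripping the ruleset through iptables-save and filtering on the SPDK_NVMF comment marker attached at setup time, then tears down the target namespace. A hedged sketch of the equivalent commands (the namespace-delete line is an assumed stand-in for `_remove_spdk_ns`, whose body is not shown in this log; root required):

```shell
# Delete only SPDK's own iptables rules: every rule the test inserted carries
# a 'SPDK_NVMF' comment, so filtering it out of a save/restore cycle leaves
# the rest of the host ruleset untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target namespace and re-flush the initiator interface,
# approximating _remove_spdk_ns plus the final 'ip -4 addr flush cvl_0_1'.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
```

Tagging rules with a comment at insert time is what makes this cleanup idempotent and safe on a shared CI node.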
16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.915 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 ************************************ 00:10:34.915 END TEST nvmf_example 00:10:34.915 ************************************ 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.178 ************************************ 00:10:35.178 START TEST nvmf_filesystem 00:10:35.178 ************************************ 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:35.178 * Looking for test storage... 
00:10:35.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:35.178 
16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.178 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.179 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:35.179 --rc genhtml_branch_coverage=1 00:10:35.179 --rc genhtml_function_coverage=1 00:10:35.179 --rc genhtml_legend=1 00:10:35.179 --rc geninfo_all_blocks=1 00:10:35.179 --rc geninfo_unexecuted_blocks=1 00:10:35.179 00:10:35.179 ' 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.179 --rc genhtml_branch_coverage=1 00:10:35.179 --rc genhtml_function_coverage=1 00:10:35.179 --rc genhtml_legend=1 00:10:35.179 --rc geninfo_all_blocks=1 00:10:35.179 --rc geninfo_unexecuted_blocks=1 00:10:35.179 00:10:35.179 ' 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.179 --rc genhtml_branch_coverage=1 00:10:35.179 --rc genhtml_function_coverage=1 00:10:35.179 --rc genhtml_legend=1 00:10:35.179 --rc geninfo_all_blocks=1 00:10:35.179 --rc geninfo_unexecuted_blocks=1 00:10:35.179 00:10:35.179 ' 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.179 --rc genhtml_branch_coverage=1 00:10:35.179 --rc genhtml_function_coverage=1 00:10:35.179 --rc genhtml_legend=1 00:10:35.179 --rc geninfo_all_blocks=1 00:10:35.179 --rc geninfo_unexecuted_blocks=1 00:10:35.179 00:10:35.179 ' 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:35.179 16:38:48 
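[Editor's note] The `cmp_versions` trace above (scripts/common.sh@333-368) is deciding whether the installed lcov is older than 2 by splitting both version strings on `.-:` and comparing numeric fields left to right, with missing fields treated as 0. A self-contained sketch of that comparison; `ver_lt` is a hypothetical name (the real helpers are `lt`/`cmp_versions`):

```shell
# ver_lt A B -> exit 0 iff version A sorts strictly before version B,
# comparing dot-separated numeric fields left to right (missing fields
# count as 0), in the spirit of cmp_versions in scripts/common.sh.
ver_lt() {
	local IFS=.
	local -a a=($1) b=($2)
	local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
	for (( i = 0; i < n; i++ )); do
		(( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides: less
		(( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # earlier field decides: greater
	done
	return 1  # all fields equal -> not strictly less-than
}
```

With lcov reporting 1.15, `ver_lt 1.15 2` succeeds, which is why the trace returns 0 from `lt 1.15 2` and enables the pre-2.x LCOV_OPTS seen below.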
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:35.179 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:35.179 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:35.179 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:35.179 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:35.180 #define SPDK_CONFIG_H 00:10:35.180 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:35.180 #define SPDK_CONFIG_APPS 1 00:10:35.180 #define SPDK_CONFIG_ARCH native 00:10:35.180 #undef SPDK_CONFIG_ASAN 00:10:35.180 #undef SPDK_CONFIG_AVAHI 00:10:35.180 #undef SPDK_CONFIG_CET 00:10:35.180 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:35.180 #define SPDK_CONFIG_COVERAGE 1 00:10:35.180 #define SPDK_CONFIG_CROSS_PREFIX 00:10:35.180 #undef SPDK_CONFIG_CRYPTO 00:10:35.180 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:35.180 #undef SPDK_CONFIG_CUSTOMOCF 00:10:35.180 #undef SPDK_CONFIG_DAOS 00:10:35.180 #define SPDK_CONFIG_DAOS_DIR 00:10:35.180 #define SPDK_CONFIG_DEBUG 1 00:10:35.180 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:35.180 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:35.180 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:35.180 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:35.180 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:35.180 #undef SPDK_CONFIG_DPDK_UADK 00:10:35.180 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:35.180 #define SPDK_CONFIG_EXAMPLES 1 00:10:35.180 #undef SPDK_CONFIG_FC 00:10:35.180 #define SPDK_CONFIG_FC_PATH 00:10:35.180 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:35.180 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:35.180 #define SPDK_CONFIG_FSDEV 1 00:10:35.180 #undef SPDK_CONFIG_FUSE 00:10:35.180 #undef SPDK_CONFIG_FUZZER 00:10:35.180 #define SPDK_CONFIG_FUZZER_LIB 00:10:35.180 #undef SPDK_CONFIG_GOLANG 00:10:35.180 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:35.180 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:35.180 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:35.180 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:35.180 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:35.180 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:10:35.180 #undef SPDK_CONFIG_HAVE_LZ4 00:10:35.180 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:35.180 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:35.180 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:35.180 #define SPDK_CONFIG_IDXD 1 00:10:35.180 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:35.180 #undef SPDK_CONFIG_IPSEC_MB 00:10:35.180 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:35.180 #define SPDK_CONFIG_ISAL 1 00:10:35.180 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:35.180 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:35.180 #define SPDK_CONFIG_LIBDIR 00:10:35.180 #undef SPDK_CONFIG_LTO 00:10:35.180 #define SPDK_CONFIG_MAX_LCORES 128 00:10:35.180 #define SPDK_CONFIG_NVME_CUSE 1 00:10:35.180 #undef SPDK_CONFIG_OCF 00:10:35.180 #define SPDK_CONFIG_OCF_PATH 00:10:35.180 #define SPDK_CONFIG_OPENSSL_PATH 00:10:35.180 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:35.180 #define SPDK_CONFIG_PGO_DIR 00:10:35.180 #undef SPDK_CONFIG_PGO_USE 00:10:35.180 #define SPDK_CONFIG_PREFIX /usr/local 00:10:35.180 #undef SPDK_CONFIG_RAID5F 00:10:35.180 #undef SPDK_CONFIG_RBD 00:10:35.180 #define SPDK_CONFIG_RDMA 1 00:10:35.180 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:35.180 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:35.180 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:35.180 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:35.180 #define SPDK_CONFIG_SHARED 1 00:10:35.180 #undef SPDK_CONFIG_SMA 00:10:35.180 #define SPDK_CONFIG_TESTS 1 00:10:35.180 #undef SPDK_CONFIG_TSAN 00:10:35.180 #define SPDK_CONFIG_UBLK 1 00:10:35.180 #define SPDK_CONFIG_UBSAN 1 00:10:35.180 #undef SPDK_CONFIG_UNIT_TESTS 00:10:35.180 #undef SPDK_CONFIG_URING 00:10:35.180 #define SPDK_CONFIG_URING_PATH 00:10:35.180 #undef SPDK_CONFIG_URING_ZNS 00:10:35.180 #undef SPDK_CONFIG_USDT 00:10:35.180 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:35.180 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:35.180 #define SPDK_CONFIG_VFIO_USER 1 00:10:35.180 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:35.180 
#define SPDK_CONFIG_VHOST 1 00:10:35.180 #define SPDK_CONFIG_VIRTIO 1 00:10:35.180 #undef SPDK_CONFIG_VTUNE 00:10:35.180 #define SPDK_CONFIG_VTUNE_DIR 00:10:35.180 #define SPDK_CONFIG_WERROR 1 00:10:35.180 #define SPDK_CONFIG_WPDK_DIR 00:10:35.180 #undef SPDK_CONFIG_XNVME 00:10:35.180 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:35.180 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:35.181 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:35.181 
16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:35.181 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:35.181 
16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:35.181 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:35.181 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:35.182 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2289411 ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2289411 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.OEtbgK 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OEtbgK/tests/target /tmp/spdk.OEtbgK 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=661032960 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4623396864 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=50886942720 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988524032 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11101581312 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982893568 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22441984 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=29919649792 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:35.183 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074614272 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:35.183 * Looking for test storage... 
00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=50886942720 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13316173824 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.183 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:35.183 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:35.183 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:35.184 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:35.184 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:35.184 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.444 --rc genhtml_branch_coverage=1 00:10:35.444 --rc genhtml_function_coverage=1 00:10:35.444 --rc genhtml_legend=1 00:10:35.444 --rc geninfo_all_blocks=1 00:10:35.444 --rc geninfo_unexecuted_blocks=1 00:10:35.444 00:10:35.444 ' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.444 --rc genhtml_branch_coverage=1 00:10:35.444 --rc genhtml_function_coverage=1 00:10:35.444 --rc genhtml_legend=1 00:10:35.444 --rc geninfo_all_blocks=1 00:10:35.444 --rc geninfo_unexecuted_blocks=1 00:10:35.444 00:10:35.444 ' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.444 --rc genhtml_branch_coverage=1 00:10:35.444 --rc genhtml_function_coverage=1 00:10:35.444 --rc genhtml_legend=1 00:10:35.444 --rc geninfo_all_blocks=1 00:10:35.444 --rc geninfo_unexecuted_blocks=1 00:10:35.444 00:10:35.444 ' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.444 --rc genhtml_branch_coverage=1 00:10:35.444 --rc genhtml_function_coverage=1 00:10:35.444 --rc genhtml_legend=1 00:10:35.444 --rc geninfo_all_blocks=1 00:10:35.444 --rc geninfo_unexecuted_blocks=1 00:10:35.444 00:10:35.444 ' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.444 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:35.444 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:35.445 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:35.445 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:10:35.445 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:10:35.445 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:10:35.445 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=()
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:10:37.979 Found 0000:09:00.0 (0x8086 - 0x159b)
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:10:37.979 Found 0000:09:00.1 (0x8086 - 0x159b)
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:10:37.979 Found net devices under 0000:09:00.0: cvl_0_0
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:10:37.979 Found net devices under 0000:09:00.1: cvl_0_1
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:10:37.979 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:37.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:37.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms
00:10:37.980 
00:10:37.980 --- 10.0.0.2 ping statistics ---
00:10:37.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:37.980 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:37.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:37.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:10:37.980 
00:10:37.980 --- 10.0.0.1 ping statistics ---
00:10:37.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:37.980 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:37.980 16:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:37.980 ************************************
00:10:37.980 START TEST nvmf_filesystem_no_in_capsule
00:10:37.980 ************************************
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2291155
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2291155
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2291155 ']'
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:37.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:37.980 [2024-10-17 16:38:51.293609] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:10:37.980 [2024-10-17 16:38:51.293683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:37.980 [2024-10-17 16:38:51.356799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:37.980 [2024-10-17 16:38:51.414901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:37.980 [2024-10-17 16:38:51.414955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:37.980 [2024-10-17 16:38:51.414968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:37.980 [2024-10-17 16:38:51.414979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:37.980 [2024-10-17 16:38:51.414988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:37.980 [2024-10-17 16:38:51.416605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:37.980 [2024-10-17 16:38:51.416660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:37.980 [2024-10-17 16:38:51.416730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:37.980 [2024-10-17 16:38:51.416733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:37.980 [2024-10-17 16:38:51.560641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.980 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:38.239 Malloc1
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:38.239 [2024-10-17 16:38:51.753047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:10:38.239 16:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:10:38.239 {
00:10:38.239 "name": "Malloc1",
00:10:38.239 "aliases": [
00:10:38.239 "2ee723ac-1490-4ea3-80af-cb56d3e72e82"
00:10:38.239 ],
00:10:38.239 "product_name": "Malloc disk",
00:10:38.239 "block_size": 512,
00:10:38.239 "num_blocks": 1048576,
00:10:38.239 "uuid": "2ee723ac-1490-4ea3-80af-cb56d3e72e82",
00:10:38.239 "assigned_rate_limits": {
00:10:38.239 "rw_ios_per_sec": 0,
00:10:38.239 "rw_mbytes_per_sec": 0,
00:10:38.239 "r_mbytes_per_sec": 0,
00:10:38.239 "w_mbytes_per_sec": 0
00:10:38.239 },
00:10:38.239 "claimed": true,
00:10:38.239 "claim_type": "exclusive_write",
00:10:38.239 "zoned": false,
00:10:38.239 "supported_io_types": {
00:10:38.239 "read": true,
00:10:38.239 "write": true,
00:10:38.239 "unmap": true,
00:10:38.239 "flush": true,
00:10:38.239 "reset": true,
00:10:38.239 "nvme_admin": false,
00:10:38.239 "nvme_io": false,
00:10:38.239 "nvme_io_md": false,
00:10:38.239 "write_zeroes": true,
00:10:38.239 "zcopy": true,
00:10:38.239 "get_zone_info": false,
00:10:38.239 "zone_management": false,
00:10:38.239 "zone_append": false,
00:10:38.239 "compare": false,
00:10:38.239 "compare_and_write": false,
00:10:38.239 "abort": true,
00:10:38.239 "seek_hole": false,
00:10:38.239 "seek_data": false,
00:10:38.239 "copy": true,
00:10:38.239 "nvme_iov_md": false
00:10:38.239 },
00:10:38.239 "memory_domains": [
00:10:38.239 {
00:10:38.239 "dma_device_id": "system",
00:10:38.239 "dma_device_type": 1
00:10:38.239 },
00:10:38.239 {
00:10:38.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.239 "dma_device_type": 2
00:10:38.239 }
00:10:38.239 ],
00:10:38.239 "driver_specific": {}
00:10:38.239 }
00:10:38.239 ]'
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:38.239 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:39.172 16:38:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:39.173 16:38:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.173 16:38:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.173 16:38:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:39.173 16:38:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:41.075 16:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:41.075 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:41.335 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:42.273 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.211 16:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.211 ************************************ 00:10:43.211 START TEST filesystem_ext4 00:10:43.211 ************************************ 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:43.211 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:43.212 16:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:43.212 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:43.212 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:43.212 mke2fs 1.47.0 (5-Feb-2023) 00:10:43.212 Discarding device blocks: 0/522240 done 00:10:43.212 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:43.212 Filesystem UUID: 5dbb13a5-5bdb-4dac-bf11-338e3fbcd333 00:10:43.212 Superblock backups stored on blocks: 00:10:43.212 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:43.212 00:10:43.212 Allocating group tables: 0/64 done 00:10:43.212 Writing inode tables: 0/64 done 00:10:43.780 Creating journal (8192 blocks): done 00:10:45.941 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:10:45.941 00:10:45.941 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:45.941 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.511 16:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2291155 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.511 00:10:52.511 real 0m8.484s 00:10:52.511 user 0m0.025s 00:10:52.511 sys 0m0.062s 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.511 ************************************ 00:10:52.511 END TEST filesystem_ext4 00:10:52.511 ************************************ 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.511 
16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.511 ************************************ 00:10:52.511 START TEST filesystem_btrfs 00:10:52.511 ************************************ 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:52.511 16:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:52.511 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.511 btrfs-progs v6.8.1 00:10:52.511 See https://btrfs.readthedocs.io for more information. 00:10:52.511 00:10:52.511 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:52.512 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.512 this does not affect your deployments: 00:10:52.512 - DUP for metadata (-m dup) 00:10:52.512 - enabled no-holes (-O no-holes) 00:10:52.512 - enabled free-space-tree (-R free-space-tree) 00:10:52.512 00:10:52.512 Label: (null) 00:10:52.512 UUID: 0aa1ff00-2109-4a27-8180-1a7627bfe8ea 00:10:52.512 Node size: 16384 00:10:52.512 Sector size: 4096 (CPU page size: 4096) 00:10:52.512 Filesystem size: 510.00MiB 00:10:52.512 Block group profiles: 00:10:52.512 Data: single 8.00MiB 00:10:52.512 Metadata: DUP 32.00MiB 00:10:52.512 System: DUP 8.00MiB 00:10:52.512 SSD detected: yes 00:10:52.512 Zoned device: no 00:10:52.512 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.512 Checksum: crc32c 00:10:52.512 Number of devices: 1 00:10:52.512 Devices: 00:10:52.512 ID SIZE PATH 00:10:52.512 1 510.00MiB /dev/nvme0n1p1 00:10:52.512 00:10:52.512 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:52.512 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.770 16:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.770 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.770 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.770 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.770 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.770 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2291155 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.028 00:10:53.028 real 0m1.280s 00:10:53.028 user 0m0.017s 00:10:53.028 sys 0m0.106s 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.028 
16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:53.028 ************************************ 00:10:53.028 END TEST filesystem_btrfs 00:10:53.028 ************************************ 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.028 ************************************ 00:10:53.028 START TEST filesystem_xfs 00:10:53.028 ************************************ 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:53.028 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:53.028 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:53.028 = sectsz=512 attr=2, projid32bit=1 00:10:53.028 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:53.028 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:53.028 data = bsize=4096 blocks=130560, imaxpct=25 00:10:53.028 = sunit=0 swidth=0 blks 00:10:53.028 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:53.028 log =internal log bsize=4096 blocks=16384, version=2 00:10:53.028 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:53.028 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.966 Discarding blocks...Done. 
00:10:53.966 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:53.966 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2291155 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.872 16:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.872 00:10:55.872 real 0m2.723s 00:10:55.872 user 0m0.007s 00:10:55.872 sys 0m0.062s 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:55.872 ************************************ 00:10:55.872 END TEST filesystem_xfs 00:10:55.872 ************************************ 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2291155 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2291155 ']' 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2291155 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2291155 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2291155' 00:10:55.872 killing process with pid 2291155 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2291155 00:10:55.872 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2291155 00:10:56.443 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:56.443 00:10:56.443 real 0m18.760s 00:10:56.443 user 1m12.739s 00:10:56.443 sys 0m2.210s 00:10:56.443 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.443 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.443 ************************************ 00:10:56.443 END TEST nvmf_filesystem_no_in_capsule 00:10:56.443 ************************************ 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.443 16:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.443 ************************************ 00:10:56.443 START TEST nvmf_filesystem_in_capsule 00:10:56.443 ************************************ 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2294147 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2294147 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2294147 ']' 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.443 16:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.443 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.443 [2024-10-17 16:39:10.110855] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:10:56.443 [2024-10-17 16:39:10.110954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.701 [2024-10-17 16:39:10.177179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.701 [2024-10-17 16:39:10.239141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.701 [2024-10-17 16:39:10.239209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.701 [2024-10-17 16:39:10.239236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.701 [2024-10-17 16:39:10.239247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.701 [2024-10-17 16:39:10.239257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:56.701 [2024-10-17 16:39:10.240860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.701 [2024-10-17 16:39:10.240935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.701 [2024-10-17 16:39:10.241027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.701 [2024-10-17 16:39:10.241035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.702 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.702 [2024-10-17 16:39:10.390852] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.960 Malloc1 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.960 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.961 16:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.961 [2024-10-17 16:39:10.570299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.961 16:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:56.961 { 00:10:56.961 "name": "Malloc1", 00:10:56.961 "aliases": [ 00:10:56.961 "518dffb4-91b6-4b7d-9f30-26d26e7e9c55" 00:10:56.961 ], 00:10:56.961 "product_name": "Malloc disk", 00:10:56.961 "block_size": 512, 00:10:56.961 "num_blocks": 1048576, 00:10:56.961 "uuid": "518dffb4-91b6-4b7d-9f30-26d26e7e9c55", 00:10:56.961 "assigned_rate_limits": { 00:10:56.961 "rw_ios_per_sec": 0, 00:10:56.961 "rw_mbytes_per_sec": 0, 00:10:56.961 "r_mbytes_per_sec": 0, 00:10:56.961 "w_mbytes_per_sec": 0 00:10:56.961 }, 00:10:56.961 "claimed": true, 00:10:56.961 "claim_type": "exclusive_write", 00:10:56.961 "zoned": false, 00:10:56.961 "supported_io_types": { 00:10:56.961 "read": true, 00:10:56.961 "write": true, 00:10:56.961 "unmap": true, 00:10:56.961 "flush": true, 00:10:56.961 "reset": true, 00:10:56.961 "nvme_admin": false, 00:10:56.961 "nvme_io": false, 00:10:56.961 "nvme_io_md": false, 00:10:56.961 "write_zeroes": true, 00:10:56.961 "zcopy": true, 00:10:56.961 "get_zone_info": false, 00:10:56.961 "zone_management": false, 00:10:56.961 "zone_append": false, 00:10:56.961 "compare": false, 00:10:56.961 "compare_and_write": false, 00:10:56.961 "abort": true, 00:10:56.961 "seek_hole": false, 00:10:56.961 "seek_data": false, 00:10:56.961 "copy": true, 00:10:56.961 "nvme_iov_md": false 00:10:56.961 }, 00:10:56.961 "memory_domains": [ 00:10:56.961 { 00:10:56.961 "dma_device_id": "system", 00:10:56.961 "dma_device_type": 1 00:10:56.961 }, 00:10:56.961 { 00:10:56.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.961 "dma_device_type": 2 00:10:56.961 } 00:10:56.961 ], 00:10:56.961 
"driver_specific": {} 00:10:56.961 } 00:10:56.961 ]' 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:56.961 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:57.222 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:57.222 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:57.222 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:57.222 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:57.222 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.791 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.791 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:57.791 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.791 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:10:57.791 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:59.697 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:59.956 16:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:59.956 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:00.522 16:39:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.903 ************************************ 00:11:01.903 START TEST filesystem_in_capsule_ext4 00:11:01.903 ************************************ 00:11:01.903 16:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:01.903 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:01.903 mke2fs 1.47.0 (5-Feb-2023) 00:11:01.903 Discarding device blocks: 
0/522240 done 00:11:01.903 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:01.903 Filesystem UUID: 4dbc753d-9899-4d32-905c-5c45c9713faf 00:11:01.903 Superblock backups stored on blocks: 00:11:01.903 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:01.903 00:11:01.903 Allocating group tables: 0/64 done 00:11:01.903 Writing inode tables: 0/64 done 00:11:02.162 Creating journal (8192 blocks): done 00:11:02.162 Writing superblocks and filesystem accounting information: 0/64 done 00:11:02.162 00:11:02.162 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:02.162 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.541 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.541 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2294147 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.542 00:11:07.542 real 0m5.960s 00:11:07.542 user 0m0.014s 00:11:07.542 sys 0m0.058s 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:07.542 ************************************ 00:11:07.542 END TEST filesystem_in_capsule_ext4 00:11:07.542 ************************************ 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.542 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.802 ************************************ 00:11:07.802 START 
TEST filesystem_in_capsule_btrfs 00:11:07.802 ************************************ 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:07.802 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:08.062 btrfs-progs v6.8.1 00:11:08.062 See https://btrfs.readthedocs.io for more information. 00:11:08.062 00:11:08.062 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:08.062 NOTE: several default settings have changed in version 5.15, please make sure 00:11:08.062 this does not affect your deployments: 00:11:08.062 - DUP for metadata (-m dup) 00:11:08.062 - enabled no-holes (-O no-holes) 00:11:08.062 - enabled free-space-tree (-R free-space-tree) 00:11:08.062 00:11:08.062 Label: (null) 00:11:08.062 UUID: 1fc16a64-206b-426b-9c5e-68244300d91e 00:11:08.062 Node size: 16384 00:11:08.062 Sector size: 4096 (CPU page size: 4096) 00:11:08.062 Filesystem size: 510.00MiB 00:11:08.062 Block group profiles: 00:11:08.062 Data: single 8.00MiB 00:11:08.062 Metadata: DUP 32.00MiB 00:11:08.062 System: DUP 8.00MiB 00:11:08.062 SSD detected: yes 00:11:08.062 Zoned device: no 00:11:08.062 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:08.062 Checksum: crc32c 00:11:08.062 Number of devices: 1 00:11:08.062 Devices: 00:11:08.062 ID SIZE PATH 00:11:08.062 1 510.00MiB /dev/nvme0n1p1 00:11:08.062 00:11:08.062 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:08.062 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2294147 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.002 00:11:09.002 real 0m1.193s 00:11:09.002 user 0m0.012s 00:11:09.002 sys 0m0.107s 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.002 ************************************ 00:11:09.002 END TEST filesystem_in_capsule_btrfs 00:11:09.002 ************************************ 00:11:09.002 16:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.002 ************************************ 00:11:09.002 START TEST filesystem_in_capsule_xfs 00:11:09.002 ************************************ 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:09.002 
16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:09.002 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:09.002 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:09.002 = sectsz=512 attr=2, projid32bit=1 00:11:09.002 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:09.002 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:09.002 data = bsize=4096 blocks=130560, imaxpct=25 00:11:09.002 = sunit=0 swidth=0 blks 00:11:09.002 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:09.002 log =internal log bsize=4096 blocks=16384, version=2 00:11:09.002 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:09.002 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:09.941 Discarding blocks...Done. 
00:11:09.941 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:09.941 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2294147 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.476 00:11:12.476 real 0m3.308s 00:11:12.476 user 0m0.014s 00:11:12.476 sys 0m0.066s 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:12.476 ************************************ 00:11:12.476 END TEST filesystem_in_capsule_xfs 00:11:12.476 ************************************ 00:11:12.476 16:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:12.476 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:12.476 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.735 16:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2294147 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2294147 ']' 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2294147 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.735 16:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2294147 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2294147' 00:11:12.735 killing process with pid 2294147 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2294147 00:11:12.735 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2294147 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:12.995 00:11:12.995 real 0m16.589s 00:11:12.995 user 1m4.275s 00:11:12.995 sys 0m2.003s 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.995 ************************************ 00:11:12.995 END TEST nvmf_filesystem_in_capsule 00:11:12.995 ************************************ 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.995 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.995 rmmod nvme_tcp 00:11:13.256 rmmod nvme_fabrics 00:11:13.256 rmmod nvme_keyring 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.256 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.166 00:11:15.166 real 0m40.132s 00:11:15.166 user 2m18.116s 00:11:15.166 sys 0m5.920s 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.166 ************************************ 00:11:15.166 END TEST nvmf_filesystem 00:11:15.166 ************************************ 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.166 ************************************ 00:11:15.166 START TEST nvmf_target_discovery 00:11:15.166 ************************************ 00:11:15.166 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:15.424 * Looking for test storage... 
00:11:15.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:15.424 
16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:15.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.424 --rc genhtml_branch_coverage=1 00:11:15.424 --rc genhtml_function_coverage=1 00:11:15.424 --rc genhtml_legend=1 00:11:15.424 --rc geninfo_all_blocks=1 00:11:15.424 --rc geninfo_unexecuted_blocks=1 00:11:15.424 00:11:15.424 ' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:15.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.424 --rc genhtml_branch_coverage=1 00:11:15.424 --rc genhtml_function_coverage=1 00:11:15.424 --rc genhtml_legend=1 00:11:15.424 --rc geninfo_all_blocks=1 00:11:15.424 --rc geninfo_unexecuted_blocks=1 00:11:15.424 00:11:15.424 ' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:15.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.424 --rc genhtml_branch_coverage=1 00:11:15.424 --rc genhtml_function_coverage=1 00:11:15.424 --rc genhtml_legend=1 00:11:15.424 --rc geninfo_all_blocks=1 00:11:15.424 --rc geninfo_unexecuted_blocks=1 00:11:15.424 00:11:15.424 ' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:15.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.424 --rc genhtml_branch_coverage=1 00:11:15.424 --rc genhtml_function_coverage=1 00:11:15.424 --rc genhtml_legend=1 00:11:15.424 --rc geninfo_all_blocks=1 00:11:15.424 --rc geninfo_unexecuted_blocks=1 00:11:15.424 00:11:15.424 ' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.424 16:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.424 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.424 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.424 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.425 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.956 16:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.956 16:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.956 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:17.957 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:17.957 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.957 16:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:17.957 Found net devices under 0000:09:00.0: cvl_0_0 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:17.957 16:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:17.957 Found net devices under 0000:09:00.1: cvl_0_1 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:11:17.957 00:11:17.957 --- 10.0.0.2 ping statistics --- 00:11:17.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.957 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
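The namespace plumbing logged above (common.sh@250–290: flush addresses, create a netns, move the target NIC into it, assign the 10.0.0.x/24 pair, bring links up, open TCP port 4420, then ping both ways) can be summarized as a dry-run script. This is a hedged reconstruction of what `nvmf_tcp_init` does, not the function itself; interface names `cvl_0_0`/`cvl_0_1` and addresses are taken from the log:

```shell
# Dry-run sketch of nvmf_tcp_init's steps: printed rather than executed,
# since the real commands need root and the cvl_0_* NICs.
run() { printf '+ %s\n' "$*"; }   # on a real host: run() { sudo "$@"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target side lives in the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP stays on the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Isolating the target NIC in its own namespace is what lets one machine act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over a real cable.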
00:11:17.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:11:17.957 00:11:17.957 --- 10.0.0.1 ping statistics --- 00:11:17.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.957 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:17.957 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2298296 00:11:17.958 16:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2298296 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2298296 ']' 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.958 [2024-10-17 16:39:31.357054] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:11:17.958 [2024-10-17 16:39:31.357126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.958 [2024-10-17 16:39:31.424227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.958 [2024-10-17 16:39:31.493124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
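`nvmfappstart` launches `nvmf_tgt` inside the netns and then blocks in `waitforlisten` until the RPC socket (`/var/tmp/spdk.sock`) is usable. A hedged sketch of that wait loop, simplified from what autotest_common.sh does (the real helper also probes the socket with `rpc.py`; the retry count and interval here are assumptions):

```shell
# Simplified waitforlisten: poll until the target's RPC socket appears,
# bailing out early if the target process dies while starting up.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process gone: startup failed
        [[ -e $sock ]] && return 0               # socket present: ready for RPCs
        sleep 0.1
    done
    return 1   # timed out waiting
}
```

Checking the pid on every iteration matters: without it, a target that crashes during init would stall the test for the full timeout instead of failing fast.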
00:11:17.958 [2024-10-17 16:39:31.493185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.958 [2024-10-17 16:39:31.493201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.958 [2024-10-17 16:39:31.493215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.958 [2024-10-17 16:39:31.493226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.958 [2024-10-17 16:39:31.494970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.958 [2024-10-17 16:39:31.495032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.958 [2024-10-17 16:39:31.495071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.958 [2024-10-17 16:39:31.495074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.958 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 [2024-10-17 16:39:31.647439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 Null1 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 [2024-10-17 16:39:31.687718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 Null2 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 Null3 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 Null4 00:11:18.219 
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
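The setup loop the log just walked through (discovery.sh@26–32: for each of four targets, create a null bdev, a subsystem, attach the namespace, add a TCP listener, then expose the discovery service) condenses to the sequence below. Shown as a dry-run sketch; on a live target each `rpc.py` call would go through `$NVMF_TARGET_NS_CMD scripts/rpc.py`:

```shell
# Dry-run sketch of discovery.sh's subsystem setup: four 100 MiB null
# bdevs, each exported through its own subsystem on 10.0.0.2:4420.
rpc_cmd() { printf 'rpc.py %s\n' "$*"; }   # swap for the real rpc.py wrapper

for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "$(printf 'SPDK%014d' "$i")"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The `-a` flag allows any host to connect and `-s` sets the serial number (`SPDK` plus a zero-padded index, matching the `SPDK00000000000001`-style serials in the log).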
common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.219 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.220 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:11:18.478 00:11:18.478 Discovery Log Number of Records 6, Generation counter 6 00:11:18.478 =====Discovery Log Entry 0====== 00:11:18.478 trtype: tcp 00:11:18.478 adrfam: ipv4 00:11:18.478 subtype: current discovery subsystem 00:11:18.478 treq: not required 00:11:18.478 portid: 0 00:11:18.478 trsvcid: 4420 00:11:18.478 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:18.478 traddr: 10.0.0.2 00:11:18.478 eflags: explicit discovery connections, duplicate discovery information 00:11:18.478 sectype: none 00:11:18.478 =====Discovery Log Entry 1====== 00:11:18.478 trtype: tcp 00:11:18.478 adrfam: ipv4 00:11:18.478 subtype: nvme subsystem 00:11:18.478 treq: not required 00:11:18.478 portid: 0 00:11:18.478 trsvcid: 4420 00:11:18.478 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:18.478 traddr: 10.0.0.2 00:11:18.478 eflags: none 00:11:18.478 sectype: none 00:11:18.478 =====Discovery Log Entry 2====== 00:11:18.478 
trtype: tcp 00:11:18.478 adrfam: ipv4 00:11:18.478 subtype: nvme subsystem 00:11:18.478 treq: not required 00:11:18.478 portid: 0 00:11:18.478 trsvcid: 4420 00:11:18.478 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:18.478 traddr: 10.0.0.2 00:11:18.478 eflags: none 00:11:18.478 sectype: none 00:11:18.478 =====Discovery Log Entry 3====== 00:11:18.478 trtype: tcp 00:11:18.478 adrfam: ipv4 00:11:18.478 subtype: nvme subsystem 00:11:18.478 treq: not required 00:11:18.478 portid: 0 00:11:18.478 trsvcid: 4420 00:11:18.478 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:18.478 traddr: 10.0.0.2 00:11:18.478 eflags: none 00:11:18.478 sectype: none 00:11:18.478 =====Discovery Log Entry 4====== 00:11:18.478 trtype: tcp 00:11:18.478 adrfam: ipv4 00:11:18.478 subtype: nvme subsystem 00:11:18.478 treq: not required 00:11:18.478 portid: 0 00:11:18.478 trsvcid: 4420 00:11:18.478 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:18.478 traddr: 10.0.0.2 00:11:18.478 eflags: none 00:11:18.478 sectype: none 00:11:18.478 =====Discovery Log Entry 5====== 00:11:18.478 trtype: tcp 00:11:18.478 adrfam: ipv4 00:11:18.478 subtype: discovery subsystem referral 00:11:18.478 treq: not required 00:11:18.478 portid: 0 00:11:18.478 trsvcid: 4430 00:11:18.478 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:18.478 traddr: 10.0.0.2 00:11:18.478 eflags: none 00:11:18.478 sectype: none 00:11:18.478 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:18.478 Perform nvmf subsystem discovery via RPC 00:11:18.478 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:18.478 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.478 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.478 [ 00:11:18.478 { 00:11:18.478 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:18.478 "subtype": "Discovery", 00:11:18.478 "listen_addresses": [ 00:11:18.478 { 00:11:18.478 "trtype": "TCP", 00:11:18.478 "adrfam": "IPv4", 00:11:18.478 "traddr": "10.0.0.2", 00:11:18.478 "trsvcid": "4420" 00:11:18.478 } 00:11:18.478 ], 00:11:18.478 "allow_any_host": true, 00:11:18.478 "hosts": [] 00:11:18.478 }, 00:11:18.478 { 00:11:18.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.478 "subtype": "NVMe", 00:11:18.478 "listen_addresses": [ 00:11:18.478 { 00:11:18.478 "trtype": "TCP", 00:11:18.478 "adrfam": "IPv4", 00:11:18.479 "traddr": "10.0.0.2", 00:11:18.479 "trsvcid": "4420" 00:11:18.479 } 00:11:18.479 ], 00:11:18.479 "allow_any_host": true, 00:11:18.479 "hosts": [], 00:11:18.479 "serial_number": "SPDK00000000000001", 00:11:18.479 "model_number": "SPDK bdev Controller", 00:11:18.479 "max_namespaces": 32, 00:11:18.479 "min_cntlid": 1, 00:11:18.479 "max_cntlid": 65519, 00:11:18.479 "namespaces": [ 00:11:18.479 { 00:11:18.479 "nsid": 1, 00:11:18.479 "bdev_name": "Null1", 00:11:18.479 "name": "Null1", 00:11:18.479 "nguid": "C2279A59A03149FA8FB016E3E6D2A7A9", 00:11:18.479 "uuid": "c2279a59-a031-49fa-8fb0-16e3e6d2a7a9" 00:11:18.479 } 00:11:18.479 ] 00:11:18.479 }, 00:11:18.479 { 00:11:18.479 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:18.479 "subtype": "NVMe", 00:11:18.479 "listen_addresses": [ 00:11:18.479 { 00:11:18.479 "trtype": "TCP", 00:11:18.479 "adrfam": "IPv4", 00:11:18.479 "traddr": "10.0.0.2", 00:11:18.479 "trsvcid": "4420" 00:11:18.479 } 00:11:18.479 ], 00:11:18.479 "allow_any_host": true, 00:11:18.479 "hosts": [], 00:11:18.479 "serial_number": "SPDK00000000000002", 00:11:18.479 "model_number": "SPDK bdev Controller", 00:11:18.479 "max_namespaces": 32, 00:11:18.479 "min_cntlid": 1, 00:11:18.479 "max_cntlid": 65519, 00:11:18.479 "namespaces": [ 00:11:18.479 { 00:11:18.479 "nsid": 1, 00:11:18.479 "bdev_name": "Null2", 00:11:18.479 "name": "Null2", 00:11:18.479 "nguid": "7B5F07B9F50B4151AE4217974CC96355", 
00:11:18.479 "uuid": "7b5f07b9-f50b-4151-ae42-17974cc96355" 00:11:18.479 } 00:11:18.479 ] 00:11:18.479 }, 00:11:18.479 { 00:11:18.479 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:18.479 "subtype": "NVMe", 00:11:18.479 "listen_addresses": [ 00:11:18.479 { 00:11:18.479 "trtype": "TCP", 00:11:18.479 "adrfam": "IPv4", 00:11:18.479 "traddr": "10.0.0.2", 00:11:18.479 "trsvcid": "4420" 00:11:18.479 } 00:11:18.479 ], 00:11:18.479 "allow_any_host": true, 00:11:18.479 "hosts": [], 00:11:18.479 "serial_number": "SPDK00000000000003", 00:11:18.479 "model_number": "SPDK bdev Controller", 00:11:18.479 "max_namespaces": 32, 00:11:18.479 "min_cntlid": 1, 00:11:18.479 "max_cntlid": 65519, 00:11:18.479 "namespaces": [ 00:11:18.479 { 00:11:18.479 "nsid": 1, 00:11:18.479 "bdev_name": "Null3", 00:11:18.479 "name": "Null3", 00:11:18.479 "nguid": "764D90C9E9344A38B064324CA6BC6B95", 00:11:18.479 "uuid": "764d90c9-e934-4a38-b064-324ca6bc6b95" 00:11:18.479 } 00:11:18.479 ] 00:11:18.479 }, 00:11:18.479 { 00:11:18.479 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:18.479 "subtype": "NVMe", 00:11:18.479 "listen_addresses": [ 00:11:18.479 { 00:11:18.479 "trtype": "TCP", 00:11:18.479 "adrfam": "IPv4", 00:11:18.479 "traddr": "10.0.0.2", 00:11:18.479 "trsvcid": "4420" 00:11:18.479 } 00:11:18.479 ], 00:11:18.479 "allow_any_host": true, 00:11:18.479 "hosts": [], 00:11:18.479 "serial_number": "SPDK00000000000004", 00:11:18.479 "model_number": "SPDK bdev Controller", 00:11:18.479 "max_namespaces": 32, 00:11:18.479 "min_cntlid": 1, 00:11:18.479 "max_cntlid": 65519, 00:11:18.479 "namespaces": [ 00:11:18.479 { 00:11:18.479 "nsid": 1, 00:11:18.479 "bdev_name": "Null4", 00:11:18.479 "name": "Null4", 00:11:18.479 "nguid": "C7AAF109522145DCB8F27532E78376F2", 00:11:18.479 "uuid": "c7aaf109-5221-45dc-b8f2-7532e78376f2" 00:11:18.479 } 00:11:18.479 ] 00:11:18.479 } 00:11:18.479 ] 00:11:18.479 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 
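The `nvmf_get_subsystems` dump above is plain JSON, so the interesting fields can be pulled out with standard text tools. A hedged example extracting the subsystem NQNs without `jq` (the sample JSON is abbreviated from the log output; only the NQN values shown there are reused):

```shell
# Extract "nqn" values from nvmf_get_subsystems-style JSON output.
# Sample trimmed to three entries; NQNs copied from the log above.
json='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery"},
       {"nqn": "nqn.2016-06.io.spdk:cnode1"},
       {"nqn": "nqn.2016-06.io.spdk:cnode2"}]'

# grep isolates each  "nqn": "..."  pair; cut keeps the quoted value.
nqns=$(printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | cut -d'"' -f4)
printf '%s\n' "$nqns"
```

In the real test `rpc_cmd nvmf_get_subsystems` returns one entry per subsystem (discovery plus cnode1–cnode4 here), which is how the script verifies all four targets were registered.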
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:18.479 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.479 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.479 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
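Teardown mirrors setup in reverse (discovery.sh@42–49): delete each subsystem before its backing bdev, drop the 4430 referral, then confirm with `bdev_get_bdevs` that nothing is left. A dry-run sketch of that ordering:

```shell
# Dry-run sketch of the teardown loop: subsystem first, then its bdev,
# so no namespace ever references a bdev that is already gone.
rpc_cmd() { printf 'rpc.py %s\n' "$*"; }   # stand-in for the real rpc.py wrapper

for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```

The final `bdev_get_bdevs | jq -r '.[].name'` check in the log yields an empty `check_bdevs`, which is the test's proof that cleanup was complete.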
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.479 rmmod nvme_tcp 00:11:18.479 rmmod nvme_fabrics 00:11:18.479 rmmod nvme_keyring 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2298296 ']' 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2298296 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2298296 ']' 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2298296 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 
00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.479 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2298296 00:11:18.739 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.739 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.739 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2298296' 00:11:18.739 killing process with pid 2298296 00:11:18.739 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2298296 00:11:18.739 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2298296 00:11:18.999 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:18.999 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.000 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.913 00:11:20.913 real 0m5.657s 00:11:20.913 user 0m4.782s 00:11:20.913 sys 0m1.888s 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.913 ************************************ 00:11:20.913 END TEST nvmf_target_discovery 00:11:20.913 ************************************ 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.913 ************************************ 00:11:20.913 START TEST nvmf_referrals 00:11:20.913 ************************************ 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:20.913 * Looking for test storage... 
00:11:20.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.913 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:21.172 16:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.172 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:21.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.172 
--rc genhtml_branch_coverage=1 00:11:21.172 --rc genhtml_function_coverage=1 00:11:21.172 --rc genhtml_legend=1 00:11:21.172 --rc geninfo_all_blocks=1 00:11:21.172 --rc geninfo_unexecuted_blocks=1 00:11:21.172 00:11:21.173 ' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.173 --rc genhtml_branch_coverage=1 00:11:21.173 --rc genhtml_function_coverage=1 00:11:21.173 --rc genhtml_legend=1 00:11:21.173 --rc geninfo_all_blocks=1 00:11:21.173 --rc geninfo_unexecuted_blocks=1 00:11:21.173 00:11:21.173 ' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.173 --rc genhtml_branch_coverage=1 00:11:21.173 --rc genhtml_function_coverage=1 00:11:21.173 --rc genhtml_legend=1 00:11:21.173 --rc geninfo_all_blocks=1 00:11:21.173 --rc geninfo_unexecuted_blocks=1 00:11:21.173 00:11:21.173 ' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.173 --rc genhtml_branch_coverage=1 00:11:21.173 --rc genhtml_function_coverage=1 00:11:21.173 --rc genhtml_legend=1 00:11:21.173 --rc geninfo_all_blocks=1 00:11:21.173 --rc geninfo_unexecuted_blocks=1 00:11:21.173 00:11:21.173 ' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.173 
16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.173 16:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:21.173 16:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.173 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:23.082 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:23.082 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:23.082 Found net devices under 0000:09:00.0: cvl_0_0 00:11:23.082 16:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:23.082 Found net devices under 0000:09:00.1: cvl_0_1 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.082 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.083 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.083 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.083 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.083 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.083 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:11:23.341 00:11:23.341 --- 10.0.0.2 ping statistics --- 00:11:23.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.341 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:11:23.341 00:11:23.341 --- 10.0.0.1 ping statistics --- 00:11:23.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.341 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2300397 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2300397 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2300397 ']' 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.341 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.341 [2024-10-17 16:39:36.983625] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:11:23.341 [2024-10-17 16:39:36.983698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.601 [2024-10-17 16:39:37.049384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.601 [2024-10-17 16:39:37.107252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.601 [2024-10-17 16:39:37.107317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:23.601 [2024-10-17 16:39:37.107330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.601 [2024-10-17 16:39:37.107355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.601 [2024-10-17 16:39:37.107365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.601 [2024-10-17 16:39:37.108891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.601 [2024-10-17 16:39:37.108915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.601 [2024-10-17 16:39:37.108971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.601 [2024-10-17 16:39:37.108974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 [2024-10-17 16:39:37.250550] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 [2024-10-17 16:39:37.262771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:23.601 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:23.861 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.121 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.121 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.381 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.639 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.897 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:25.155 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:25.155 16:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.415 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.415 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.676 rmmod nvme_tcp 00:11:25.676 rmmod nvme_fabrics 00:11:25.676 rmmod nvme_keyring 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2300397 ']' 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2300397 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2300397 ']' 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2300397 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2300397 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2300397' 00:11:25.676 killing process with pid 2300397 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 2300397 00:11:25.676 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2300397 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.935 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.475 00:11:28.475 real 0m7.038s 00:11:28.475 user 0m11.056s 00:11:28.475 sys 0m2.294s 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.475 
************************************ 00:11:28.475 END TEST nvmf_referrals 00:11:28.475 ************************************ 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.475 ************************************ 00:11:28.475 START TEST nvmf_connect_disconnect 00:11:28.475 ************************************ 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:28.475 * Looking for test storage... 
00:11:28.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.475 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.476 --rc genhtml_branch_coverage=1 00:11:28.476 --rc genhtml_function_coverage=1 00:11:28.476 --rc genhtml_legend=1 00:11:28.476 --rc geninfo_all_blocks=1 00:11:28.476 --rc geninfo_unexecuted_blocks=1 00:11:28.476 00:11:28.476 ' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.476 --rc genhtml_branch_coverage=1 00:11:28.476 --rc genhtml_function_coverage=1 00:11:28.476 --rc genhtml_legend=1 00:11:28.476 --rc geninfo_all_blocks=1 00:11:28.476 --rc geninfo_unexecuted_blocks=1 00:11:28.476 00:11:28.476 ' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.476 --rc genhtml_branch_coverage=1 00:11:28.476 --rc genhtml_function_coverage=1 00:11:28.476 --rc genhtml_legend=1 00:11:28.476 --rc geninfo_all_blocks=1 00:11:28.476 --rc geninfo_unexecuted_blocks=1 00:11:28.476 00:11:28.476 ' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.476 --rc genhtml_branch_coverage=1 00:11:28.476 --rc genhtml_function_coverage=1 00:11:28.476 --rc genhtml_legend=1 00:11:28.476 --rc geninfo_all_blocks=1 00:11:28.476 --rc geninfo_unexecuted_blocks=1 00:11:28.476 00:11:28.476 ' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.476 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.477 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:28.477 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:28.477 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.477 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.395 16:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.395 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.396 16:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:30.396 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:30.396 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.396 16:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:30.396 Found net devices under 0000:09:00.0: cvl_0_0 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:30.396 16:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:30.396 Found net devices under 0000:09:00.1: cvl_0_1 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.396 16:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:11:30.396 00:11:30.396 --- 10.0.0.2 ping statistics --- 00:11:30.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.396 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:30.396 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:11:30.396 00:11:30.396 --- 10.0.0.1 ping statistics --- 00:11:30.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.396 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
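The trace above (nvmf/common.sh `nvmf_tcp_init`) builds a split-namespace TCP test topology: one NIC port is moved into a network namespace for the target, the other stays in the host for the initiator, and connectivity is verified with a ping in each direction before the target starts. A minimal standalone sketch of those steps, using the interface names and addresses from this run (requires root and real NIC ports; this is a summary of the logged commands, not the actual SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup traced in this log. Interface names and
# addresses mirror this run (cvl_0_0 / cvl_0_1, 10.0.0.0/24); adjust for
# other hardware. Not the real nvmf/common.sh implementation.
setup_tcp_test_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3

    # Start from a clean slate on both ports.
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"

    # Target port lives inside its own namespace.
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"

    # Initiator side on the host, target side in the namespace.
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP listener port (4420) on the initiator-facing NIC.
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"

    # Verify both directions before launching the target.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}

# Example invocation matching this log:
#   setup_tcp_test_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, the target is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk … nvmf_tgt`, as seen further down in the log), so initiator and target traverse a real NIC-to-NIC path on one machine.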
nvmfpid=2302699 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2302699 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2302699 ']' 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.396 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.657 [2024-10-17 16:39:44.093293] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:11:30.657 [2024-10-17 16:39:44.093405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.657 [2024-10-17 16:39:44.160454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.657 [2024-10-17 16:39:44.221362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:30.657 [2024-10-17 16:39:44.221419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.657 [2024-10-17 16:39:44.221448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.657 [2024-10-17 16:39:44.221459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.657 [2024-10-17 16:39:44.221469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.657 [2024-10-17 16:39:44.223107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.657 [2024-10-17 16:39:44.223131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.657 [2024-10-17 16:39:44.223191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.657 [2024-10-17 16:39:44.223194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.657 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.657 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:30.657 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:30.657 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:30.916 16:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.916 [2024-10-17 16:39:44.379459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.916 16:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.916 [2024-10-17 16:39:44.448201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:30.916 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:34.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:45.106 16:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.106 rmmod nvme_tcp 00:11:45.106 rmmod nvme_fabrics 00:11:45.106 rmmod nvme_keyring 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2302699 ']' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2302699 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2302699 ']' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2302699 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302699 
00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302699' 00:11:45.106 killing process with pid 2302699 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2302699 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2302699 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.106 16:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.106 16:39:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.094 00:11:47.094 real 0m18.980s 00:11:47.094 user 0m56.836s 00:11:47.094 sys 0m3.495s 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.094 ************************************ 00:11:47.094 END TEST nvmf_connect_disconnect 00:11:47.094 ************************************ 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.094 ************************************ 00:11:47.094 START TEST nvmf_multitarget 00:11:47.094 ************************************ 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:47.094 * Looking for test storage... 
00:11:47.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:47.094 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:47.355 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.355 --rc genhtml_branch_coverage=1 00:11:47.355 --rc genhtml_function_coverage=1 00:11:47.355 --rc genhtml_legend=1 00:11:47.355 --rc geninfo_all_blocks=1 00:11:47.355 --rc geninfo_unexecuted_blocks=1 00:11:47.355 00:11:47.355 ' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:47.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.355 --rc genhtml_branch_coverage=1 00:11:47.355 --rc genhtml_function_coverage=1 00:11:47.355 --rc genhtml_legend=1 00:11:47.355 --rc geninfo_all_blocks=1 00:11:47.355 --rc geninfo_unexecuted_blocks=1 00:11:47.355 00:11:47.355 ' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:47.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.355 --rc genhtml_branch_coverage=1 00:11:47.355 --rc genhtml_function_coverage=1 00:11:47.355 --rc genhtml_legend=1 00:11:47.355 --rc geninfo_all_blocks=1 00:11:47.355 --rc geninfo_unexecuted_blocks=1 00:11:47.355 00:11:47.355 ' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:47.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.355 --rc genhtml_branch_coverage=1 00:11:47.355 --rc genhtml_function_coverage=1 00:11:47.355 --rc genhtml_legend=1 00:11:47.355 --rc geninfo_all_blocks=1 00:11:47.355 --rc geninfo_unexecuted_blocks=1 00:11:47.355 00:11:47.355 ' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.355 16:40:00 
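The `cmp_versions` / `lt 1.15 2` trace above (scripts/common.sh) is a dotted-version comparison: each version string is split into numeric fields and compared field by field, with missing fields treated as zero. A minimal sketch of that idea, written here as a hypothetical standalone helper rather than the actual SPDK implementation:

```shell
# Hypothetical re-implementation of the "is version $1 less than $2"
# check traced above: strip one dot-separated field at a time and
# compare numerically, padding a shorter version with zeros.
# Returns 0 (true) if $1 < $2, 1 otherwise. Uses plain globals for
# POSIX-sh portability.
version_lt() {
    v1=$1 v2=$2
    while [ -n "$v1" ] || [ -n "$v2" ]; do
        a=${v1%%.*}   # leading field of each version (empty => 0)
        b=${v2%%.*}
        [ "${a:-0}" -lt "${b:-0}" ] && return 0
        [ "${a:-0}" -gt "${b:-0}" ] && return 1
        # drop the field just compared, or empty out when none remain
        case $v1 in *.*) v1=${v1#*.} ;; *) v1= ;; esac
        case $v2 in *.*) v2=${v2#*.} ;; *) v2= ;; esac
    done
    return 1   # all fields equal: not less-than
}
```

So `version_lt 1.15 2` succeeds (1 < 2 on the first field) even though a naive string comparison of "1.15" and "2" would get this wrong, which is exactly why the traced script compares fields numerically.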
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.355 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.356 16:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.356 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:49.260 16:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.260 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.261 16:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:49.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:49.261 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.261 16:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:49.261 Found net devices under 0000:09:00.0: cvl_0_0 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.261 
16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:49.261 Found net devices under 0000:09:00.1: cvl_0_1 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.261 16:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.261 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:11:49.519 00:11:49.519 --- 10.0.0.2 ping statistics --- 00:11:49.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.519 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:11:49.519 00:11:49.519 --- 10.0.0.1 ping statistics --- 00:11:49.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.519 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2306429 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2306429 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2306429 ']' 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.519 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.519 [2024-10-17 16:40:03.038467] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:11:49.519 [2024-10-17 16:40:03.038537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.519 [2024-10-17 16:40:03.105466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.519 [2024-10-17 16:40:03.170489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.519 [2024-10-17 16:40:03.170554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:49.519 [2024-10-17 16:40:03.170578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.519 [2024-10-17 16:40:03.170599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.519 [2024-10-17 16:40:03.170611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.519 [2024-10-17 16:40:03.172275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.519 [2024-10-17 16:40:03.172337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.519 [2024-10-17 16:40:03.172388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.519 [2024-10-17 16:40:03.172392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:49.778 16:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:49.778 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:50.036 "nvmf_tgt_1" 00:11:50.036 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:50.036 "nvmf_tgt_2" 00:11:50.036 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:50.036 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:50.293 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:50.293 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:50.293 true 00:11:50.293 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:50.552 true 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.552 rmmod nvme_tcp 00:11:50.552 rmmod nvme_fabrics 00:11:50.552 rmmod nvme_keyring 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2306429 ']' 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2306429 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2306429 ']' 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2306429 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2306429 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2306429' 00:11:50.552 killing process with pid 2306429 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2306429 00:11:50.552 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2306429 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.811 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.352 00:11:53.352 real 0m5.844s 00:11:53.352 user 0m6.686s 00:11:53.352 sys 0m1.943s 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:53.352 ************************************ 00:11:53.352 END TEST nvmf_multitarget 00:11:53.352 ************************************ 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.352 ************************************ 00:11:53.352 START TEST nvmf_rpc 00:11:53.352 ************************************ 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:53.352 * Looking for test storage... 
00:11:53.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.352 16:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.352 --rc genhtml_branch_coverage=1 00:11:53.352 --rc genhtml_function_coverage=1 00:11:53.352 --rc genhtml_legend=1 00:11:53.352 --rc geninfo_all_blocks=1 00:11:53.352 --rc geninfo_unexecuted_blocks=1 
00:11:53.352 00:11:53.352 ' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.352 --rc genhtml_branch_coverage=1 00:11:53.352 --rc genhtml_function_coverage=1 00:11:53.352 --rc genhtml_legend=1 00:11:53.352 --rc geninfo_all_blocks=1 00:11:53.352 --rc geninfo_unexecuted_blocks=1 00:11:53.352 00:11:53.352 ' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.352 --rc genhtml_branch_coverage=1 00:11:53.352 --rc genhtml_function_coverage=1 00:11:53.352 --rc genhtml_legend=1 00:11:53.352 --rc geninfo_all_blocks=1 00:11:53.352 --rc geninfo_unexecuted_blocks=1 00:11:53.352 00:11:53.352 ' 00:11:53.352 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:53.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.352 --rc genhtml_branch_coverage=1 00:11:53.352 --rc genhtml_function_coverage=1 00:11:53.352 --rc genhtml_legend=1 00:11:53.352 --rc geninfo_all_blocks=1 00:11:53.353 --rc geninfo_unexecuted_blocks=1 00:11:53.353 00:11:53.353 ' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.353 16:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:53.353 16:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.353 16:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.262 
16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:11:55.262 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:55.262 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:55.262 Found net devices under 0000:09:00.0: cvl_0_0 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:55.262 Found net devices under 0000:09:00.1: cvl_0_1 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.262 16:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.262 
16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.262 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:11:55.263 00:11:55.263 --- 10.0.0.2 ping statistics --- 00:11:55.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.263 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:55.263 00:11:55.263 --- 10.0.0.1 ping statistics --- 00:11:55.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.263 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2308583 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.263 
16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2308583 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2308583 ']' 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.263 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.263 [2024-10-17 16:40:08.873573] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:11:55.263 [2024-10-17 16:40:08.873668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.263 [2024-10-17 16:40:08.945928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.521 [2024-10-17 16:40:09.009020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.521 [2024-10-17 16:40:09.009072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.521 [2024-10-17 16:40:09.009086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.521 [2024-10-17 16:40:09.009098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:55.521 [2024-10-17 16:40:09.009108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.521 [2024-10-17 16:40:09.010696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.521 [2024-10-17 16:40:09.010757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.522 [2024-10-17 16:40:09.010823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.522 [2024-10-17 16:40:09.010827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:55.522 "tick_rate": 2700000000, 00:11:55.522 "poll_groups": [ 00:11:55.522 { 00:11:55.522 "name": "nvmf_tgt_poll_group_000", 00:11:55.522 "admin_qpairs": 0, 00:11:55.522 "io_qpairs": 0, 00:11:55.522 
"current_admin_qpairs": 0, 00:11:55.522 "current_io_qpairs": 0, 00:11:55.522 "pending_bdev_io": 0, 00:11:55.522 "completed_nvme_io": 0, 00:11:55.522 "transports": [] 00:11:55.522 }, 00:11:55.522 { 00:11:55.522 "name": "nvmf_tgt_poll_group_001", 00:11:55.522 "admin_qpairs": 0, 00:11:55.522 "io_qpairs": 0, 00:11:55.522 "current_admin_qpairs": 0, 00:11:55.522 "current_io_qpairs": 0, 00:11:55.522 "pending_bdev_io": 0, 00:11:55.522 "completed_nvme_io": 0, 00:11:55.522 "transports": [] 00:11:55.522 }, 00:11:55.522 { 00:11:55.522 "name": "nvmf_tgt_poll_group_002", 00:11:55.522 "admin_qpairs": 0, 00:11:55.522 "io_qpairs": 0, 00:11:55.522 "current_admin_qpairs": 0, 00:11:55.522 "current_io_qpairs": 0, 00:11:55.522 "pending_bdev_io": 0, 00:11:55.522 "completed_nvme_io": 0, 00:11:55.522 "transports": [] 00:11:55.522 }, 00:11:55.522 { 00:11:55.522 "name": "nvmf_tgt_poll_group_003", 00:11:55.522 "admin_qpairs": 0, 00:11:55.522 "io_qpairs": 0, 00:11:55.522 "current_admin_qpairs": 0, 00:11:55.522 "current_io_qpairs": 0, 00:11:55.522 "pending_bdev_io": 0, 00:11:55.522 "completed_nvme_io": 0, 00:11:55.522 "transports": [] 00:11:55.522 } 00:11:55.522 ] 00:11:55.522 }' 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:55.522 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.780 [2024-10-17 16:40:09.252959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:55.780 "tick_rate": 2700000000, 00:11:55.780 "poll_groups": [ 00:11:55.780 { 00:11:55.780 "name": "nvmf_tgt_poll_group_000", 00:11:55.780 "admin_qpairs": 0, 00:11:55.780 "io_qpairs": 0, 00:11:55.780 "current_admin_qpairs": 0, 00:11:55.780 "current_io_qpairs": 0, 00:11:55.780 "pending_bdev_io": 0, 00:11:55.780 "completed_nvme_io": 0, 00:11:55.780 "transports": [ 00:11:55.780 { 00:11:55.780 "trtype": "TCP" 00:11:55.780 } 00:11:55.780 ] 00:11:55.780 }, 00:11:55.780 { 00:11:55.780 "name": "nvmf_tgt_poll_group_001", 00:11:55.780 "admin_qpairs": 0, 00:11:55.780 "io_qpairs": 0, 00:11:55.780 "current_admin_qpairs": 0, 00:11:55.780 "current_io_qpairs": 0, 00:11:55.780 "pending_bdev_io": 0, 00:11:55.780 "completed_nvme_io": 0, 00:11:55.780 "transports": [ 00:11:55.780 { 00:11:55.780 "trtype": "TCP" 00:11:55.780 } 00:11:55.780 ] 00:11:55.780 }, 00:11:55.780 { 00:11:55.780 "name": "nvmf_tgt_poll_group_002", 00:11:55.780 "admin_qpairs": 0, 00:11:55.780 "io_qpairs": 0, 00:11:55.780 
"current_admin_qpairs": 0, 00:11:55.780 "current_io_qpairs": 0, 00:11:55.780 "pending_bdev_io": 0, 00:11:55.780 "completed_nvme_io": 0, 00:11:55.780 "transports": [ 00:11:55.780 { 00:11:55.780 "trtype": "TCP" 00:11:55.780 } 00:11:55.780 ] 00:11:55.780 }, 00:11:55.780 { 00:11:55.780 "name": "nvmf_tgt_poll_group_003", 00:11:55.780 "admin_qpairs": 0, 00:11:55.780 "io_qpairs": 0, 00:11:55.780 "current_admin_qpairs": 0, 00:11:55.780 "current_io_qpairs": 0, 00:11:55.780 "pending_bdev_io": 0, 00:11:55.780 "completed_nvme_io": 0, 00:11:55.780 "transports": [ 00:11:55.780 { 00:11:55.780 "trtype": "TCP" 00:11:55.780 } 00:11:55.780 ] 00:11:55.780 } 00:11:55.780 ] 00:11:55.780 }' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:55.780 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.781 Malloc1 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.781 [2024-10-17 16:40:09.432599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.781 
16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:55.781 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:55.781 [2024-10-17 16:40:09.455276] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:56.040 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:56.040 could not add new controller: failed to write to nvme-fabrics device 00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.040 16:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.040 16:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:56.607 16:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:11:56.607 16:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:11:56.607 16:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:56.607 16:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:56.607 16:40:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:58.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:11:58.512 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:58.772 [2024-10-17 16:40:12.206714] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a'
00:11:58.772 Failed to write to /dev/nvme-fabrics: Input/output error
00:11:58.772 could not add new controller: failed to write to nvme-fabrics device
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.772 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:59.340 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:11:59.340 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:11:59.340 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:59.340 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:59.340 16:40:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:01.246 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:01.246 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:01.246 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:01.506 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:01.506 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:01.506 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:01.506 16:40:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:01.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:01.506 [2024-10-17 16:40:15.090336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.506 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:02.445 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:02.445 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:02.445 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:02.445 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:02.445 16:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:04.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.352 16:40:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.352 [2024-10-17 16:40:18.004131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.352 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:05.289 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:05.289 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:05.289 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:05.289 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:05.289 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:07.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.199 [2024-10-17 16:40:20.833668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.199 16:40:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:08.136 16:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:08.136 16:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:08.136 16:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:08.136 16:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:08.136 16:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:10.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:10.042 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.043 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.302 [2024-10-17 16:40:23.754169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.302 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.303 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:10.872 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:10.872 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:10.873 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:10.873 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:10.873 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:12.774 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:13.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:13.032 [2024-10-17 16:40:26.535454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.032 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:13.601 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:13.601 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:13.601 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:13.601 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:13.601 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:16.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.140 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 [2024-10-17 16:40:29.339572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc --
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 [2024-10-17 16:40:29.387651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.141 
16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 [2024-10-17 16:40:29.435801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.141 
16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 [2024-10-17 16:40:29.483955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.141 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 [2024-10-17 
16:40:29.532165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 
16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:16.142 "tick_rate": 2700000000, 00:12:16.142 "poll_groups": [ 00:12:16.142 { 00:12:16.142 "name": "nvmf_tgt_poll_group_000", 00:12:16.142 "admin_qpairs": 2, 00:12:16.142 "io_qpairs": 84, 00:12:16.142 "current_admin_qpairs": 0, 00:12:16.142 "current_io_qpairs": 0, 00:12:16.142 "pending_bdev_io": 0, 00:12:16.142 "completed_nvme_io": 183, 00:12:16.142 "transports": [ 00:12:16.142 { 00:12:16.142 "trtype": "TCP" 00:12:16.142 } 00:12:16.142 ] 00:12:16.142 }, 00:12:16.142 { 00:12:16.142 "name": "nvmf_tgt_poll_group_001", 00:12:16.142 "admin_qpairs": 2, 00:12:16.142 "io_qpairs": 84, 00:12:16.142 "current_admin_qpairs": 0, 00:12:16.142 "current_io_qpairs": 0, 00:12:16.142 "pending_bdev_io": 0, 00:12:16.142 "completed_nvme_io": 185, 00:12:16.142 "transports": [ 00:12:16.142 { 00:12:16.142 "trtype": "TCP" 00:12:16.142 } 00:12:16.142 ] 00:12:16.142 }, 00:12:16.142 { 00:12:16.142 "name": "nvmf_tgt_poll_group_002", 00:12:16.142 "admin_qpairs": 1, 00:12:16.142 "io_qpairs": 84, 00:12:16.142 "current_admin_qpairs": 0, 00:12:16.142 "current_io_qpairs": 0, 00:12:16.142 "pending_bdev_io": 0, 00:12:16.142 "completed_nvme_io": 134, 00:12:16.142 "transports": [ 00:12:16.142 { 00:12:16.142 "trtype": "TCP" 00:12:16.142 } 00:12:16.142 ] 00:12:16.142 }, 00:12:16.142 { 00:12:16.142 "name": "nvmf_tgt_poll_group_003", 00:12:16.142 "admin_qpairs": 2, 00:12:16.142 "io_qpairs": 84, 
00:12:16.142 "current_admin_qpairs": 0, 00:12:16.142 "current_io_qpairs": 0, 00:12:16.142 "pending_bdev_io": 0, 00:12:16.142 "completed_nvme_io": 184, 00:12:16.142 "transports": [ 00:12:16.142 { 00:12:16.142 "trtype": "TCP" 00:12:16.142 } 00:12:16.142 ] 00:12:16.142 } 00:12:16.142 ] 00:12:16.142 }' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.142 rmmod nvme_tcp 00:12:16.142 rmmod nvme_fabrics 00:12:16.142 rmmod nvme_keyring 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2308583 ']' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2308583 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2308583 ']' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2308583 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2308583 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2308583' 00:12:16.142 killing process with pid 2308583 00:12:16.142 16:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2308583 00:12:16.142 16:40:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2308583 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.403 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.948 00:12:18.948 real 0m25.520s 00:12:18.948 user 1m23.409s 00:12:18.948 sys 0m4.046s 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.948 ************************************ 00:12:18.948 END TEST 
nvmf_rpc 00:12:18.948 ************************************ 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.948 ************************************ 00:12:18.948 START TEST nvmf_invalid 00:12:18.948 ************************************ 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:18.948 * Looking for test storage... 00:12:18.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.948 --rc genhtml_branch_coverage=1 00:12:18.948 --rc genhtml_function_coverage=1 00:12:18.948 --rc genhtml_legend=1 00:12:18.948 --rc geninfo_all_blocks=1 00:12:18.948 --rc geninfo_unexecuted_blocks=1 00:12:18.948 00:12:18.948 ' 
00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.948 --rc genhtml_branch_coverage=1 00:12:18.948 --rc genhtml_function_coverage=1 00:12:18.948 --rc genhtml_legend=1 00:12:18.948 --rc geninfo_all_blocks=1 00:12:18.948 --rc geninfo_unexecuted_blocks=1 00:12:18.948 00:12:18.948 ' 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.948 --rc genhtml_branch_coverage=1 00:12:18.948 --rc genhtml_function_coverage=1 00:12:18.948 --rc genhtml_legend=1 00:12:18.948 --rc geninfo_all_blocks=1 00:12:18.948 --rc geninfo_unexecuted_blocks=1 00:12:18.948 00:12:18.948 ' 00:12:18.948 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.948 --rc genhtml_branch_coverage=1 00:12:18.948 --rc genhtml_function_coverage=1 00:12:18.948 --rc genhtml_legend=1 00:12:18.949 --rc geninfo_all_blocks=1 00:12:18.949 --rc geninfo_unexecuted_blocks=1 00:12:18.949 00:12:18.949 ' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.949 16:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.949 
16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.949 16:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:18.949 16:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.949 16:40:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.857 16:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.857 16:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:20.857 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:20.857 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:20.857 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:20.858 Found net devices under 0000:09:00.0: cvl_0_0 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:20.858 Found net devices under 0000:09:00.1: cvl_0_1 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.858 16:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.858 16:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:12:20.858 00:12:20.858 --- 10.0.0.2 ping statistics --- 00:12:20.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.858 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:20.858 00:12:20.858 --- 10.0.0.1 ping statistics --- 00:12:20.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.858 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:20.858 16:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:20.858 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2313097 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2313097 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2313097 ']' 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.116 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.117 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:21.117 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.117 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.117 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:21.117 [2024-10-17 16:40:34.612412] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:12:21.117 [2024-10-17 16:40:34.612504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.117 [2024-10-17 16:40:34.681208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.117 [2024-10-17 16:40:34.746430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.117 [2024-10-17 16:40:34.746483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.117 [2024-10-17 16:40:34.746499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.117 [2024-10-17 16:40:34.746512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.117 [2024-10-17 16:40:34.746524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.117 [2024-10-17 16:40:34.752024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.117 [2024-10-17 16:40:34.752061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.117 [2024-10-17 16:40:34.752113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.117 [2024-10-17 16:40:34.752117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:21.375 16:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6093 00:12:21.633 [2024-10-17 16:40:35.161635] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:21.633 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:21.633 { 00:12:21.633 "nqn": "nqn.2016-06.io.spdk:cnode6093", 00:12:21.633 "tgt_name": "foobar", 00:12:21.633 "method": "nvmf_create_subsystem", 00:12:21.633 "req_id": 1 00:12:21.633 } 00:12:21.633 Got JSON-RPC error 
response 00:12:21.633 response: 00:12:21.633 { 00:12:21.633 "code": -32603, 00:12:21.633 "message": "Unable to find target foobar" 00:12:21.633 }' 00:12:21.633 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:21.633 { 00:12:21.633 "nqn": "nqn.2016-06.io.spdk:cnode6093", 00:12:21.633 "tgt_name": "foobar", 00:12:21.633 "method": "nvmf_create_subsystem", 00:12:21.633 "req_id": 1 00:12:21.633 } 00:12:21.633 Got JSON-RPC error response 00:12:21.633 response: 00:12:21.633 { 00:12:21.633 "code": -32603, 00:12:21.633 "message": "Unable to find target foobar" 00:12:21.633 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:21.633 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:21.633 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24653 00:12:21.892 [2024-10-17 16:40:35.446604] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24653: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:21.892 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:21.892 { 00:12:21.892 "nqn": "nqn.2016-06.io.spdk:cnode24653", 00:12:21.892 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:21.892 "method": "nvmf_create_subsystem", 00:12:21.892 "req_id": 1 00:12:21.892 } 00:12:21.892 Got JSON-RPC error response 00:12:21.892 response: 00:12:21.892 { 00:12:21.892 "code": -32602, 00:12:21.892 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:21.892 }' 00:12:21.892 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:21.892 { 00:12:21.892 "nqn": "nqn.2016-06.io.spdk:cnode24653", 00:12:21.892 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:21.892 "method": "nvmf_create_subsystem", 00:12:21.892 
"req_id": 1 00:12:21.892 } 00:12:21.892 Got JSON-RPC error response 00:12:21.892 response: 00:12:21.892 { 00:12:21.892 "code": -32602, 00:12:21.892 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:21.892 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:21.892 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:21.892 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23629 00:12:22.151 [2024-10-17 16:40:35.711461] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23629: invalid model number 'SPDK_Controller' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:22.151 { 00:12:22.151 "nqn": "nqn.2016-06.io.spdk:cnode23629", 00:12:22.151 "model_number": "SPDK_Controller\u001f", 00:12:22.151 "method": "nvmf_create_subsystem", 00:12:22.151 "req_id": 1 00:12:22.151 } 00:12:22.151 Got JSON-RPC error response 00:12:22.151 response: 00:12:22.151 { 00:12:22.151 "code": -32602, 00:12:22.151 "message": "Invalid MN SPDK_Controller\u001f" 00:12:22.151 }' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:22.151 { 00:12:22.151 "nqn": "nqn.2016-06.io.spdk:cnode23629", 00:12:22.151 "model_number": "SPDK_Controller\u001f", 00:12:22.151 "method": "nvmf_create_subsystem", 00:12:22.151 "req_id": 1 00:12:22.151 } 00:12:22.151 Got JSON-RPC error response 00:12:22.151 response: 00:12:22.151 { 00:12:22.151 "code": -32602, 00:12:22.151 "message": "Invalid MN SPDK_Controller\u001f" 00:12:22.151 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:22.151 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:22.151 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:22.151 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.151 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.152 16:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p+GzHI74jT@3$&AH(-Gvr' 00:12:22.152 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p+GzHI74jT@3$&AH(-Gvr' nqn.2016-06.io.spdk:cnode32586 00:12:22.784 [2024-10-17 16:40:36.124847] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32586: invalid serial number 'p+GzHI74jT@3$&AH(-Gvr' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:22.784 { 00:12:22.784 "nqn": "nqn.2016-06.io.spdk:cnode32586", 00:12:22.784 "serial_number": "p+GzHI74jT@3$&AH(-Gvr", 00:12:22.784 "method": "nvmf_create_subsystem", 00:12:22.784 "req_id": 1 00:12:22.784 } 00:12:22.784 Got JSON-RPC error response 00:12:22.784 response: 00:12:22.784 { 00:12:22.784 "code": -32602, 00:12:22.784 "message": "Invalid SN p+GzHI74jT@3$&AH(-Gvr" 00:12:22.784 }' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:22.784 { 00:12:22.784 "nqn": "nqn.2016-06.io.spdk:cnode32586", 00:12:22.784 "serial_number": "p+GzHI74jT@3$&AH(-Gvr", 00:12:22.784 "method": "nvmf_create_subsystem", 00:12:22.784 "req_id": 1 00:12:22.784 } 00:12:22.784 Got JSON-RPC error response 00:12:22.784 response: 00:12:22.784 { 00:12:22.784 "code": -32602, 00:12:22.784 "message": "Invalid SN p+GzHI74jT@3$&AH(-Gvr" 00:12:22.784 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:22.784 16:40:36 
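The xtrace above shows `gen_random_s` building a serial number one character at a time: pick an ASCII code from the `chars` table (32–127), render it with `printf %x` plus `echo -e '\xNN'`, and append it to `string` until `ll` reaches `length`, then hand the result to `nvmf_create_subsystem`, which rejects it with a `-32602` "Invalid SN" JSON-RPC error. A minimal self-contained sketch of that generation loop, assuming it mirrors the traced `target/invalid.sh` helper (the function name and structure here are illustrative, not the verbatim SPDK script):

```shell
#!/usr/bin/env bash
# Sketch of the random-string generation traced above: draw printable
# ASCII codes 32..127 and append each rendered character to the result.
gen_random_s() {
    local length=$1 ll string='' hex
    for (( ll = 0; ll < length; ll++ )); do
        # Same range as the chars=('32' ... '127') table in the trace.
        printf -v hex '%x' $(( RANDOM % 96 + 32 ))
        # echo -e '\xNN' renders the code as a character, as in the log.
        string+=$(echo -e "\x$hex")
    done
    # printf instead of a bare echo so a leading '-' in the string is safe.
    printf '%s\n' "$string"
}

gen_random_s 21   # e.g. the 21-char serial the log feeds to rpc.py
```

In the test itself the result is then passed as `-s '<string>'` to `scripts/rpc.py nvmf_create_subsystem`, and the run is considered a pass only if the error output matches `*Invalid SN*`.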
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:22.784 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:22.784 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:22.784 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:22.784 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:22.785 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:22.785 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:22.785 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:12:22.785 16:40:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1mAzlxu4b8/u? /dev/null' 00:12:25.661 16:40:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.568 00:12:27.568 real 0m9.106s 00:12:27.568 user 0m21.791s 00:12:27.568 sys 0m2.493s 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:27.568 ************************************ 00:12:27.568 END TEST nvmf_invalid 00:12:27.568 ************************************ 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.568 16:40:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.827 ************************************ 00:12:27.827 START TEST nvmf_connect_stress 00:12:27.827 ************************************ 00:12:27.827 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:27.827 * Looking for test storage... 00:12:27.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.827 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.828 16:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:27.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.828 --rc genhtml_branch_coverage=1 00:12:27.828 --rc genhtml_function_coverage=1 00:12:27.828 --rc genhtml_legend=1 00:12:27.828 --rc geninfo_all_blocks=1 00:12:27.828 --rc geninfo_unexecuted_blocks=1 00:12:27.828 00:12:27.828 ' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:27.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.828 --rc genhtml_branch_coverage=1 00:12:27.828 --rc genhtml_function_coverage=1 00:12:27.828 --rc genhtml_legend=1 00:12:27.828 --rc geninfo_all_blocks=1 00:12:27.828 --rc geninfo_unexecuted_blocks=1 00:12:27.828 00:12:27.828 ' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:27.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.828 --rc genhtml_branch_coverage=1 00:12:27.828 --rc genhtml_function_coverage=1 00:12:27.828 --rc genhtml_legend=1 00:12:27.828 --rc geninfo_all_blocks=1 00:12:27.828 --rc geninfo_unexecuted_blocks=1 00:12:27.828 00:12:27.828 ' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:27.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.828 --rc genhtml_branch_coverage=1 00:12:27.828 --rc 
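The `cmp_versions` steps traced above (`IFS=.-:`, `read -ra ver1`, then a per-field `(( ver1[v] < ver2[v] ))` loop) boil down to a numeric field-by-field comparison. A condensed sketch of that logic from scripts/common.sh; the helper name here is mine, and missing fields default to 0:

```shell
# Split both version strings on the .-: separators and compare
# numerically field by field, as cmp_versions does above.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal versions are not less-than
}
```

This is why the `lt 1.15 2` check in the trace succeeds: the first fields compare 1 < 2 and the loop returns immediately.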
genhtml_function_coverage=1 00:12:27.828 --rc genhtml_legend=1 00:12:27.828 --rc geninfo_all_blocks=1 00:12:27.828 --rc geninfo_unexecuted_blocks=1 00:12:27.828 00:12:27.828 ' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:27.828 16:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.828 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.829 16:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.734 16:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:29.734 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.734 16:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:29.734 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.734 16:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:29.734 Found net devices under 0000:09:00.0: cvl_0_0 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:29.734 Found net devices under 0000:09:00.1: cvl_0_1 
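The `gather_supported_nvmf_pci_devs` loop above finds the kernel net interfaces for each supported PCI address by globbing sysfs, which is how it reports `Found net devices under 0000:09:00.0: cvl_0_0`. A minimal sketch of that lookup (helper name is mine; the real code also checks link state and driver binding):

```shell
# For each PCI address (bdf), list its kernel net interfaces from
# /sys/bus/pci/devices/<bdf>/net/, as the trace above does.
find_pci_net_devs() {
  local pci entry names=()
  for pci in "$@"; do
    for entry in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$entry" ] && names+=("${entry##*/}")   # e.g. cvl_0_0
    done
  done
  printf '%s\n' "${names[@]}"
}
```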
00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.734 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.735 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:29.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:12:29.995 00:12:29.995 --- 10.0.0.2 ping statistics --- 00:12:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.995 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:12:29.995 00:12:29.995 --- 10.0.0.1 ping statistics --- 00:12:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.995 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:29.995 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:29.996 16:40:43 
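The `nvmf_tcp_init` sequence traced above builds the test topology: the target-side NIC is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, links come up, and a ping in each direction confirms reachability. Condensed into one function (interface names and addresses are the ones from this run; it needs root and the real `cvl_0_*` NICs, so it is illustrative rather than directly runnable elsewhere):

```shell
# Recreate the ip-netns topology from the trace above: target NIC in a
# namespace at 10.0.0.2, initiator NIC in the root namespace at 10.0.0.1.
setup_target_ns() {
  local ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"              # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  ping -c 1 10.0.0.2                           # root namespace -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1       # namespace -> initiator
}
```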
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2315747 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2315747 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2315747 ']' 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.996 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.996 [2024-10-17 16:40:43.620265] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:12:29.996 [2024-10-17 16:40:43.620379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.996 [2024-10-17 16:40:43.685156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:30.255 [2024-10-17 16:40:43.746395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.255 [2024-10-17 16:40:43.746452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.255 [2024-10-17 16:40:43.746481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.255 [2024-10-17 16:40:43.746493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.255 [2024-10-17 16:40:43.746503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
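The `waitforlisten 2315747` call above blocks until the freshly started `nvmf_tgt` is up and its RPC UNIX socket at /var/tmp/spdk.sock exists, bounded by `max_retries=100`. A condensed sketch of that polling pattern (the real helper also verifies the socket answers an RPC, which is omitted here):

```shell
# Poll until the target process is alive and its RPC socket file
# appears, or give up after max_retries attempts.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process exited early
    [ -S "$rpc_addr" ] && return 0           # RPC socket file exists
    sleep 0.1
  done
  return 1
}
```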
00:12:30.255 [2024-10-17 16:40:43.748090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.255 [2024-10-17 16:40:43.748142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.255 [2024-10-17 16:40:43.748138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.255 [2024-10-17 16:40:43.903204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.255 [2024-10-17 16:40:43.920517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.255 NULL1 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2315769 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:30.255 16:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.255 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.514 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.515 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.773 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.773 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:30.773 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.773 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.773 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.033 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.033 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:31.033 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.033 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.033 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.294 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.294 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:31.294 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.294 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.294 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.860 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.860 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:31.860 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.860 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.860 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.118 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.118 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:32.118 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.118 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.118 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.378 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.378 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:32.378 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.378 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.378 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.636 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:32.636 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.636 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.636 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.895 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:32.895 16:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.895 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.895 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.463 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.463 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:33.463 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.463 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.463 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.731 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.731 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:33.731 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.731 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.731 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.993 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.993 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:33.993 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.993 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.993 
16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.253 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.253 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:34.253 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.253 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.253 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:34.513 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.513 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:34.513 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.513 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.513 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.082 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.082 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:35.082 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.082 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.082 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.341 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.341 
16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:35.341 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.341 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.341 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.601 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.601 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:35.601 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.601 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.601 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.861 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.861 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:35.861 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.861 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.861 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.119 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.119 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:36.119 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:36.119 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.119 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.690 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.690 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:36.690 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.690 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.690 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.950 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.950 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:36.950 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.950 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.950 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.208 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.208 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:37.208 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.208 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.208 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:37.467 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.467 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:37.467 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.467 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.467 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.726 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.726 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:37.726 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.726 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.726 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.294 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.294 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:38.294 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.294 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.294 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.554 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.554 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2315769 00:12:38.554 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.554 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.554 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.813 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.813 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:38.813 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.813 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.813 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.073 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.073 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:39.073 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.073 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.073 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.333 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.333 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:39.333 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.333 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:39.333 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.902 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.902 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:39.902 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.902 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.902 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.160 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.160 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:40.160 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.160 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.160 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.420 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.420 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:40.420 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.420 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.420 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.420 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2315769 00:12:40.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2315769) - No such process 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2315769 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.679 rmmod nvme_tcp 00:12:40.679 rmmod nvme_fabrics 00:12:40.679 rmmod nvme_keyring 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2315747 ']' 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2315747 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2315747 ']' 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2315747 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.679 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2315747 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2315747' 00:12:40.937 killing process with pid 2315747 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2315747 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2315747 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.937 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.480 00:12:43.480 real 0m15.378s 00:12:43.480 user 0m38.801s 00:12:43.480 sys 0m5.685s 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.480 ************************************ 00:12:43.480 END TEST nvmf_connect_stress 00:12:43.480 ************************************ 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.480 ************************************ 00:12:43.480 START TEST nvmf_fused_ordering 00:12:43.480 ************************************ 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:43.480 * Looking for test storage... 00:12:43.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.480 16:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.480 --rc genhtml_branch_coverage=1 00:12:43.480 --rc genhtml_function_coverage=1 00:12:43.480 --rc genhtml_legend=1 00:12:43.480 --rc geninfo_all_blocks=1 00:12:43.480 --rc geninfo_unexecuted_blocks=1 00:12:43.480 00:12:43.480 ' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.480 --rc genhtml_branch_coverage=1 00:12:43.480 --rc genhtml_function_coverage=1 00:12:43.480 --rc genhtml_legend=1 00:12:43.480 --rc geninfo_all_blocks=1 00:12:43.480 --rc geninfo_unexecuted_blocks=1 00:12:43.480 00:12:43.480 ' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:43.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.480 --rc genhtml_branch_coverage=1 00:12:43.480 --rc genhtml_function_coverage=1 00:12:43.480 --rc genhtml_legend=1 00:12:43.480 --rc geninfo_all_blocks=1 00:12:43.480 --rc geninfo_unexecuted_blocks=1 00:12:43.480 00:12:43.480 ' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:43.480 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:43.480 --rc genhtml_branch_coverage=1 00:12:43.480 --rc genhtml_function_coverage=1 00:12:43.480 --rc genhtml_legend=1 00:12:43.480 --rc geninfo_all_blocks=1 00:12:43.480 --rc geninfo_unexecuted_blocks=1 00:12:43.480 00:12:43.480 ' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.480 16:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.480 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
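The trace above records a non-fatal scripting error from nvmf/common.sh line 33 ("[: : integer expression expected"): the test `'[' '' -eq 1 ']'` receives an empty string where an integer is required. A minimal sketch of the defensive pattern that avoids this (the function and variable names here are illustrative, not from the SPDK tree):

```shell
#!/usr/bin/env bash
# Integer tests like [ "$flag" -eq 1 ] fail with
# "[: : integer expression expected" when $flag is empty or unset.
# Defaulting with ${var:-0} keeps the operand a valid integer.

flag_enabled() {
    local flag=${1:-0}   # empty or missing argument falls back to 0
    [ "$flag" -eq 1 ]
}

flag_enabled 1 && echo "on"     # prints "on"
flag_enabled "" || echo "off"   # empty input no longer errors; prints "off"
```

The same guard applies to the traced `'[' 0 -eq 1 ']'` checks elsewhere in the log, which succeed only because their variables happen to be set.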
00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.481 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.383 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.383 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.383 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.383 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.383 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.383 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.384 16:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:45.384 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.384 16:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:45.384 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.384 16:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:45.384 Found net devices under 0000:09:00.0: cvl_0_0 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:45.384 Found net devices under 0000:09:00.1: cvl_0_1 
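The discovery loop traced above resolves each PCI function to its kernel interfaces by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the directory prefix with `"${pci_net_devs[@]##*/}"`, yielding names like `cvl_0_0`. A hedged sketch of that idiom, parameterized on a base directory so it does not assume a real sysfs (the directory layout in the test is a stand-in):

```shell
#!/usr/bin/env bash
# List the net device names registered under a PCI device directory,
# mirroring nvmf/common.sh's
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
# $1: a directory laid out like /sys/bus/pci/devices/<bdf>
net_devs_under() {
    local pci_dir=$1
    local devs=("$pci_dir"/net/*)
    # With nullglob unset, a non-matching glob stays literal; detect that.
    [[ -e ${devs[0]} ]] || { echo "no net devices under $pci_dir" >&2; return 1; }
    printf '%s\n' "${devs[@]##*/}"   # strip the path, keep interface basenames
}
```

Against a live system this would be invoked as `net_devs_under /sys/bus/pci/devices/0000:09:00.0`, matching the "Found net devices under 0000:09:00.0: cvl_0_0" lines in the log.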
00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.384 16:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:12:45.384 00:12:45.384 --- 10.0.0.2 ping statistics --- 00:12:45.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.384 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:12:45.384 00:12:45.384 --- 10.0.0.1 ping statistics --- 00:12:45.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.384 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:45.384 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:45.385 16:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:45.385 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.385 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2319041 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2319041 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2319041 ']' 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.643 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.643 [2024-10-17 16:40:59.123875] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:12:45.643 [2024-10-17 16:40:59.123969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.643 [2024-10-17 16:40:59.192876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.643 [2024-10-17 16:40:59.255613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.643 [2024-10-17 16:40:59.255679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.643 [2024-10-17 16:40:59.255696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.643 [2024-10-17 16:40:59.255710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.643 [2024-10-17 16:40:59.255721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
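After launching nvmf_tgt inside the namespace, the harness calls `waitforlisten 2319041` (traced above), which polls with a bounded `max_retries` until the target is up and listening on `/var/tmp/spdk.sock`. A generic, hedged sketch of that bounded-retry pattern, polling for a filesystem path rather than issuing real RPCs (names and parameters here are illustrative):

```shell
#!/usr/bin/env bash
# Poll for a readiness marker (here: a path appearing) with a bounded
# retry count, in the spirit of autotest_common.sh's waitforlisten.
# $1: path to wait for; $2: max attempts; $3: delay between attempts
wait_for_path() {
    local path=$1 max_retries=${2:-100} delay=${3:-0.1}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep "$delay"
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

The real helper additionally verifies the process is still alive between attempts, so a crashed target fails fast instead of burning the full retry budget.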
00:12:45.643 [2024-10-17 16:40:59.256367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 [2024-10-17 16:40:59.400603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 [2024-10-17 16:40:59.416803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 NULL1 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.902 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:45.902 [2024-10-17 16:40:59.462804] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:12:45.902 [2024-10-17 16:40:59.462847] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319070 ] 00:12:46.471 Attached to nqn.2016-06.io.spdk:cnode1 00:12:46.471 Namespace ID: 1 size: 1GB 00:12:46.471 fused_ordering(0) 00:12:46.471 fused_ordering(1) 00:12:46.471 fused_ordering(2) 00:12:46.471 fused_ordering(3) 00:12:46.471 fused_ordering(4) 00:12:46.471 fused_ordering(5) 00:12:46.471 fused_ordering(6) 00:12:46.471 fused_ordering(7) 00:12:46.471 fused_ordering(8) 00:12:46.471 fused_ordering(9) 00:12:46.471 fused_ordering(10) 00:12:46.471 fused_ordering(11) 00:12:46.471 fused_ordering(12) 00:12:46.471 fused_ordering(13) 00:12:46.471 fused_ordering(14) 00:12:46.471 fused_ordering(15) 00:12:46.471 fused_ordering(16) 00:12:46.471 fused_ordering(17) 00:12:46.471 fused_ordering(18) 00:12:46.471 fused_ordering(19) 00:12:46.471 fused_ordering(20) 00:12:46.471 fused_ordering(21) 00:12:46.471 fused_ordering(22) 00:12:46.471 fused_ordering(23) 00:12:46.471 fused_ordering(24) 00:12:46.471 fused_ordering(25) 00:12:46.471 fused_ordering(26) 00:12:46.471 fused_ordering(27) 00:12:46.471 
fused_ordering(28) 00:12:46.471 fused_ordering(29) 00:12:46.471 fused_ordering(30) 00:12:46.471 fused_ordering(31) 00:12:46.471 fused_ordering(32) 00:12:46.471 fused_ordering(33) 00:12:46.471 fused_ordering(34) 00:12:46.471 fused_ordering(35) 00:12:46.471 fused_ordering(36) 00:12:46.471 fused_ordering(37) 00:12:46.471 fused_ordering(38) 00:12:46.471 fused_ordering(39) 00:12:46.471 fused_ordering(40) 00:12:46.471 fused_ordering(41) 00:12:46.471 fused_ordering(42) 00:12:46.471 fused_ordering(43) 00:12:46.471 fused_ordering(44) 00:12:46.471 fused_ordering(45) 00:12:46.471 fused_ordering(46) 00:12:46.471 fused_ordering(47) 00:12:46.471 fused_ordering(48) 00:12:46.471 fused_ordering(49) 00:12:46.471 fused_ordering(50) 00:12:46.471 fused_ordering(51) 00:12:46.471 fused_ordering(52) 00:12:46.471 fused_ordering(53) 00:12:46.471 fused_ordering(54) 00:12:46.471 fused_ordering(55) 00:12:46.471 fused_ordering(56) 00:12:46.471 fused_ordering(57) 00:12:46.471 fused_ordering(58) 00:12:46.471 fused_ordering(59) 00:12:46.471 fused_ordering(60) 00:12:46.471 fused_ordering(61) 00:12:46.471 fused_ordering(62) 00:12:46.471 fused_ordering(63) 00:12:46.471 fused_ordering(64) 00:12:46.471 fused_ordering(65) 00:12:46.471 fused_ordering(66) 00:12:46.471 fused_ordering(67) 00:12:46.471 fused_ordering(68) 00:12:46.471 fused_ordering(69) 00:12:46.471 fused_ordering(70) 00:12:46.471 fused_ordering(71) 00:12:46.471 fused_ordering(72) 00:12:46.471 fused_ordering(73) 00:12:46.471 fused_ordering(74) 00:12:46.471 fused_ordering(75) 00:12:46.471 fused_ordering(76) 00:12:46.471 fused_ordering(77) 00:12:46.471 fused_ordering(78) 00:12:46.471 fused_ordering(79) 00:12:46.471 fused_ordering(80) 00:12:46.471 fused_ordering(81) 00:12:46.471 fused_ordering(82) 00:12:46.471 fused_ordering(83) 00:12:46.471 fused_ordering(84) 00:12:46.471 fused_ordering(85) 00:12:46.471 fused_ordering(86) 00:12:46.471 fused_ordering(87) 00:12:46.471 fused_ordering(88) 00:12:46.471 fused_ordering(89) 00:12:46.471 
fused_ordering(90) 00:12:46.471 fused_ordering(91) 00:12:46.471 fused_ordering(92) 00:12:46.471 fused_ordering(93) 00:12:46.471 fused_ordering(94) 00:12:46.471 fused_ordering(95) 00:12:46.471 fused_ordering(96) 00:12:46.471 fused_ordering(97) 00:12:46.471 fused_ordering(98) 00:12:46.471 fused_ordering(99) 00:12:46.471 fused_ordering(100) 00:12:46.471 fused_ordering(101) 00:12:46.471 fused_ordering(102) 00:12:46.471 fused_ordering(103) 00:12:46.471 fused_ordering(104) 00:12:46.471 fused_ordering(105) 00:12:46.471 fused_ordering(106) 00:12:46.471 fused_ordering(107) 00:12:46.471 fused_ordering(108) 00:12:46.471 fused_ordering(109) 00:12:46.471 fused_ordering(110) 00:12:46.471 fused_ordering(111) 00:12:46.471 fused_ordering(112) 00:12:46.471 fused_ordering(113) 00:12:46.471 fused_ordering(114) 00:12:46.471 fused_ordering(115) 00:12:46.471 fused_ordering(116) 00:12:46.471 fused_ordering(117) 00:12:46.471 fused_ordering(118) 00:12:46.471 fused_ordering(119) 00:12:46.471 fused_ordering(120) 00:12:46.471 fused_ordering(121) 00:12:46.472 fused_ordering(122) 00:12:46.472 fused_ordering(123) 00:12:46.472 fused_ordering(124) 00:12:46.472 fused_ordering(125) 00:12:46.472 fused_ordering(126) 00:12:46.472 fused_ordering(127) 00:12:46.472 fused_ordering(128) 00:12:46.472 fused_ordering(129) 00:12:46.472 fused_ordering(130) 00:12:46.472 fused_ordering(131) 00:12:46.472 fused_ordering(132) 00:12:46.472 fused_ordering(133) 00:12:46.472 fused_ordering(134) 00:12:46.472 fused_ordering(135) 00:12:46.472 fused_ordering(136) 00:12:46.472 fused_ordering(137) 00:12:46.472 fused_ordering(138) 00:12:46.472 fused_ordering(139) 00:12:46.472 fused_ordering(140) 00:12:46.472 fused_ordering(141) 00:12:46.472 fused_ordering(142) 00:12:46.472 fused_ordering(143) 00:12:46.472 fused_ordering(144) 00:12:46.472 fused_ordering(145) 00:12:46.472 fused_ordering(146) 00:12:46.472 fused_ordering(147) 00:12:46.472 fused_ordering(148) 00:12:46.472 fused_ordering(149) 00:12:46.472 fused_ordering(150) 
00:12:46.472 fused_ordering(151) 00:12:46.472 fused_ordering(152) 00:12:46.472 fused_ordering(153) 00:12:46.472 fused_ordering(154) 00:12:46.472 fused_ordering(155) 00:12:46.472 fused_ordering(156) 00:12:46.472 fused_ordering(157) 00:12:46.472 fused_ordering(158) 00:12:46.472 fused_ordering(159) 00:12:46.472 fused_ordering(160) 00:12:46.472 fused_ordering(161) 00:12:46.472 fused_ordering(162) 00:12:46.472 fused_ordering(163) 00:12:46.472 fused_ordering(164) 00:12:46.472 fused_ordering(165) 00:12:46.472 fused_ordering(166) 00:12:46.472 fused_ordering(167) 00:12:46.472 fused_ordering(168) 00:12:46.472 fused_ordering(169) 00:12:46.472 fused_ordering(170) 00:12:46.472 fused_ordering(171) 00:12:46.472 fused_ordering(172) 00:12:46.472 fused_ordering(173) 00:12:46.472 fused_ordering(174) 00:12:46.472 fused_ordering(175) 00:12:46.472 fused_ordering(176) 00:12:46.472 fused_ordering(177) 00:12:46.472 fused_ordering(178) 00:12:46.472 fused_ordering(179) 00:12:46.472 fused_ordering(180) 00:12:46.472 fused_ordering(181) 00:12:46.472 fused_ordering(182) 00:12:46.472 fused_ordering(183) 00:12:46.472 fused_ordering(184) 00:12:46.472 fused_ordering(185) 00:12:46.472 fused_ordering(186) 00:12:46.472 fused_ordering(187) 00:12:46.472 fused_ordering(188) 00:12:46.472 fused_ordering(189) 00:12:46.472 fused_ordering(190) 00:12:46.472 fused_ordering(191) 00:12:46.472 fused_ordering(192) 00:12:46.472 fused_ordering(193) 00:12:46.472 fused_ordering(194) 00:12:46.472 fused_ordering(195) 00:12:46.472 fused_ordering(196) 00:12:46.472 fused_ordering(197) 00:12:46.472 fused_ordering(198) 00:12:46.472 fused_ordering(199) 00:12:46.472 fused_ordering(200) 00:12:46.472 fused_ordering(201) 00:12:46.472 fused_ordering(202) 00:12:46.472 fused_ordering(203) 00:12:46.472 fused_ordering(204) 00:12:46.472 fused_ordering(205) 00:12:46.730 fused_ordering(206) 00:12:46.730 fused_ordering(207) 00:12:46.730 fused_ordering(208) 00:12:46.730 fused_ordering(209) 00:12:46.730 fused_ordering(210) 00:12:46.730 
fused_ordering(211) 00:12:46.730 fused_ordering(212) 00:12:46.730 fused_ordering(213) 00:12:46.730 fused_ordering(214) 00:12:46.730 fused_ordering(215) 00:12:46.730 fused_ordering(216) 00:12:46.730 fused_ordering(217) 00:12:46.730 fused_ordering(218) 00:12:46.730 fused_ordering(219) 00:12:46.730 fused_ordering(220) 00:12:46.730 fused_ordering(221) 00:12:46.730 fused_ordering(222) 00:12:46.730 fused_ordering(223) 00:12:46.730 fused_ordering(224) 00:12:46.730 fused_ordering(225) 00:12:46.730 fused_ordering(226) 00:12:46.730 fused_ordering(227) 00:12:46.730 fused_ordering(228) 00:12:46.730 fused_ordering(229) 00:12:46.730 fused_ordering(230) 00:12:46.730 fused_ordering(231) 00:12:46.730 fused_ordering(232) 00:12:46.730 fused_ordering(233) 00:12:46.730 fused_ordering(234) 00:12:46.730 fused_ordering(235) 00:12:46.730 fused_ordering(236) 00:12:46.730 fused_ordering(237) 00:12:46.730 fused_ordering(238) 00:12:46.730 fused_ordering(239) 00:12:46.730 fused_ordering(240) 00:12:46.730 fused_ordering(241) 00:12:46.730 fused_ordering(242) 00:12:46.730 fused_ordering(243) 00:12:46.730 fused_ordering(244) 00:12:46.730 fused_ordering(245) 00:12:46.730 fused_ordering(246) 00:12:46.730 fused_ordering(247) 00:12:46.730 fused_ordering(248) 00:12:46.730 fused_ordering(249) 00:12:46.730 fused_ordering(250) 00:12:46.730 fused_ordering(251) 00:12:46.730 fused_ordering(252) 00:12:46.730 fused_ordering(253) 00:12:46.730 fused_ordering(254) 00:12:46.730 fused_ordering(255) 00:12:46.730 fused_ordering(256) 00:12:46.730 fused_ordering(257) 00:12:46.730 fused_ordering(258) 00:12:46.730 fused_ordering(259) 00:12:46.730 fused_ordering(260) 00:12:46.730 fused_ordering(261) 00:12:46.730 fused_ordering(262) 00:12:46.730 fused_ordering(263) 00:12:46.730 fused_ordering(264) 00:12:46.730 fused_ordering(265) 00:12:46.730 fused_ordering(266) 00:12:46.730 fused_ordering(267) 00:12:46.730 fused_ordering(268) 00:12:46.730 fused_ordering(269) 00:12:46.730 fused_ordering(270) 00:12:46.730 fused_ordering(271) 
00:12:46.730 fused_ordering(272) 00:12:46.730 fused_ordering(273) 00:12:46.731 fused_ordering(274) 00:12:46.731 fused_ordering(275) 00:12:46.731 fused_ordering(276) 00:12:46.731 fused_ordering(277) 00:12:46.731 fused_ordering(278) 00:12:46.731 fused_ordering(279) 00:12:46.731 fused_ordering(280) 00:12:46.731 fused_ordering(281) 00:12:46.731 fused_ordering(282) 00:12:46.731 fused_ordering(283) 00:12:46.731 fused_ordering(284) 00:12:46.731 fused_ordering(285) 00:12:46.731 fused_ordering(286) 00:12:46.731 fused_ordering(287) 00:12:46.731 fused_ordering(288) 00:12:46.731 fused_ordering(289) 00:12:46.731 fused_ordering(290) 00:12:46.731 fused_ordering(291) 00:12:46.731 fused_ordering(292) 00:12:46.731 fused_ordering(293) 00:12:46.731 fused_ordering(294) 00:12:46.731 fused_ordering(295) 00:12:46.731 fused_ordering(296) 00:12:46.731 fused_ordering(297) 00:12:46.731 fused_ordering(298) 00:12:46.731 fused_ordering(299) 00:12:46.731 fused_ordering(300) 00:12:46.731 fused_ordering(301) 00:12:46.731 fused_ordering(302) 00:12:46.731 fused_ordering(303) 00:12:46.731 fused_ordering(304) 00:12:46.731 fused_ordering(305) 00:12:46.731 fused_ordering(306) 00:12:46.731 fused_ordering(307) 00:12:46.731 fused_ordering(308) 00:12:46.731 fused_ordering(309) 00:12:46.731 fused_ordering(310) 00:12:46.731 fused_ordering(311) 00:12:46.731 fused_ordering(312) 00:12:46.731 fused_ordering(313) 00:12:46.731 fused_ordering(314) 00:12:46.731 fused_ordering(315) 00:12:46.731 fused_ordering(316) 00:12:46.731 fused_ordering(317) 00:12:46.731 fused_ordering(318) 00:12:46.731 fused_ordering(319) 00:12:46.731 fused_ordering(320) 00:12:46.731 fused_ordering(321) 00:12:46.731 fused_ordering(322) 00:12:46.731 fused_ordering(323) 00:12:46.731 fused_ordering(324) 00:12:46.731 fused_ordering(325) 00:12:46.731 fused_ordering(326) 00:12:46.731 fused_ordering(327) 00:12:46.731 fused_ordering(328) 00:12:46.731 fused_ordering(329) 00:12:46.731 fused_ordering(330) 00:12:46.731 fused_ordering(331) 00:12:46.731 
fused_ordering(332) 00:12:46.731 fused_ordering(333) 00:12:46.731 fused_ordering(334) 00:12:46.731 fused_ordering(335) 00:12:46.731 fused_ordering(336) 00:12:46.731 fused_ordering(337) 00:12:46.731 fused_ordering(338) 00:12:46.731 fused_ordering(339) 00:12:46.731 fused_ordering(340) 00:12:46.731 fused_ordering(341) 00:12:46.731 fused_ordering(342) 00:12:46.731 fused_ordering(343) 00:12:46.731 fused_ordering(344) 00:12:46.731 fused_ordering(345) 00:12:46.731 fused_ordering(346) 00:12:46.731 fused_ordering(347) 00:12:46.731 fused_ordering(348) 00:12:46.731 fused_ordering(349) 00:12:46.731 fused_ordering(350) 00:12:46.731 fused_ordering(351) 00:12:46.731 fused_ordering(352) 00:12:46.731 fused_ordering(353) 00:12:46.731 fused_ordering(354) 00:12:46.731 fused_ordering(355) 00:12:46.731 fused_ordering(356) 00:12:46.731 fused_ordering(357) 00:12:46.731 fused_ordering(358) 00:12:46.731 fused_ordering(359) 00:12:46.731 fused_ordering(360) 00:12:46.731 fused_ordering(361) 00:12:46.731 fused_ordering(362) 00:12:46.731 fused_ordering(363) 00:12:46.731 fused_ordering(364) 00:12:46.731 fused_ordering(365) 00:12:46.731 fused_ordering(366) 00:12:46.731 fused_ordering(367) 00:12:46.731 fused_ordering(368) 00:12:46.731 fused_ordering(369) 00:12:46.731 fused_ordering(370) 00:12:46.731 fused_ordering(371) 00:12:46.731 fused_ordering(372) 00:12:46.731 fused_ordering(373) 00:12:46.731 fused_ordering(374) 00:12:46.731 fused_ordering(375) 00:12:46.731 fused_ordering(376) 00:12:46.731 fused_ordering(377) 00:12:46.731 fused_ordering(378) 00:12:46.731 fused_ordering(379) 00:12:46.731 fused_ordering(380) 00:12:46.731 fused_ordering(381) 00:12:46.731 fused_ordering(382) 00:12:46.731 fused_ordering(383) 00:12:46.731 fused_ordering(384) 00:12:46.731 fused_ordering(385) 00:12:46.731 fused_ordering(386) 00:12:46.731 fused_ordering(387) 00:12:46.731 fused_ordering(388) 00:12:46.731 fused_ordering(389) 00:12:46.731 fused_ordering(390) 00:12:46.731 fused_ordering(391) 00:12:46.731 fused_ordering(392) 
00:12:46.731 fused_ordering(393) 00:12:46.731 fused_ordering(394) 00:12:46.731 fused_ordering(395) 00:12:46.731 fused_ordering(396) 00:12:46.731 fused_ordering(397) 00:12:46.731 fused_ordering(398) 00:12:46.731 fused_ordering(399) 00:12:46.731 fused_ordering(400) 00:12:46.731 fused_ordering(401) 00:12:46.731 fused_ordering(402) 00:12:46.731 fused_ordering(403) 00:12:46.731 fused_ordering(404) 00:12:46.731 fused_ordering(405) 00:12:46.731 fused_ordering(406) 00:12:46.731 fused_ordering(407) 00:12:46.731 fused_ordering(408) 00:12:46.731 fused_ordering(409) 00:12:46.731 fused_ordering(410) 00:12:46.990 fused_ordering(411) 00:12:46.990 fused_ordering(412) 00:12:46.990 fused_ordering(413) 00:12:46.990 fused_ordering(414) 00:12:46.990 fused_ordering(415) 00:12:46.990 fused_ordering(416) 00:12:46.990 fused_ordering(417) 00:12:46.990 fused_ordering(418) 00:12:46.990 fused_ordering(419) 00:12:46.990 fused_ordering(420) 00:12:46.990 fused_ordering(421) 00:12:46.990 fused_ordering(422) 00:12:46.990 fused_ordering(423) 00:12:46.990 fused_ordering(424) 00:12:46.990 fused_ordering(425) 00:12:46.990 fused_ordering(426) 00:12:46.990 fused_ordering(427) 00:12:46.990 fused_ordering(428) 00:12:46.990 fused_ordering(429) 00:12:46.990 fused_ordering(430) 00:12:46.990 fused_ordering(431) 00:12:46.990 fused_ordering(432) 00:12:46.990 fused_ordering(433) 00:12:46.990 fused_ordering(434) 00:12:46.990 fused_ordering(435) 00:12:46.990 fused_ordering(436) 00:12:46.990 fused_ordering(437) 00:12:46.990 fused_ordering(438) 00:12:46.990 fused_ordering(439) 00:12:46.990 fused_ordering(440) 00:12:46.990 fused_ordering(441) 00:12:46.990 fused_ordering(442) 00:12:46.990 fused_ordering(443) 00:12:46.990 fused_ordering(444) 00:12:46.990 fused_ordering(445) 00:12:46.990 fused_ordering(446) 00:12:46.990 fused_ordering(447) 00:12:46.990 fused_ordering(448) 00:12:46.990 fused_ordering(449) 00:12:46.990 fused_ordering(450) 00:12:46.991 fused_ordering(451) 00:12:46.991 fused_ordering(452) 00:12:46.991 
fused_ordering(453) 00:12:46.991 fused_ordering(454) 00:12:46.991 fused_ordering(455) 00:12:46.991 fused_ordering(456) 00:12:46.991 fused_ordering(457) 00:12:46.991 fused_ordering(458) 00:12:46.991 fused_ordering(459) 00:12:46.991 fused_ordering(460) 00:12:46.991 fused_ordering(461) 00:12:46.991 fused_ordering(462) 00:12:46.991 fused_ordering(463) 00:12:46.991 fused_ordering(464) 00:12:46.991 fused_ordering(465) 00:12:46.991 fused_ordering(466) 00:12:46.991 fused_ordering(467) 00:12:46.991 fused_ordering(468) 00:12:46.991 fused_ordering(469) 00:12:46.991 fused_ordering(470) 00:12:46.991 fused_ordering(471) 00:12:46.991 fused_ordering(472) 00:12:46.991 fused_ordering(473) 00:12:46.991 fused_ordering(474) 00:12:46.991 fused_ordering(475) 00:12:46.991 fused_ordering(476) 00:12:46.991 fused_ordering(477) 00:12:46.991 fused_ordering(478) 00:12:46.991 fused_ordering(479) 00:12:46.991 fused_ordering(480) 00:12:46.991 fused_ordering(481) 00:12:46.991 fused_ordering(482) 00:12:46.991 fused_ordering(483) 00:12:46.991 fused_ordering(484) 00:12:46.991 fused_ordering(485) 00:12:46.991 fused_ordering(486) 00:12:46.991 fused_ordering(487) 00:12:46.991 fused_ordering(488) 00:12:46.991 fused_ordering(489) 00:12:46.991 fused_ordering(490) 00:12:46.991 fused_ordering(491) 00:12:46.991 fused_ordering(492) 00:12:46.991 fused_ordering(493) 00:12:46.991 fused_ordering(494) 00:12:46.991 fused_ordering(495) 00:12:46.991 fused_ordering(496) 00:12:46.991 fused_ordering(497) 00:12:46.991 fused_ordering(498) 00:12:46.991 fused_ordering(499) 00:12:46.991 fused_ordering(500) 00:12:46.991 fused_ordering(501) 00:12:46.991 fused_ordering(502) 00:12:46.991 fused_ordering(503) 00:12:46.991 fused_ordering(504) 00:12:46.991 fused_ordering(505) 00:12:46.991 fused_ordering(506) 00:12:46.991 fused_ordering(507) 00:12:46.991 fused_ordering(508) 00:12:46.991 fused_ordering(509) 00:12:46.991 fused_ordering(510) 00:12:46.991 fused_ordering(511) 00:12:46.991 fused_ordering(512) 00:12:46.991 fused_ordering(513) 
00:12:46.991 fused_ordering(514) 00:12:46.991 fused_ordering(515) 00:12:46.991 fused_ordering(516) 00:12:46.991 fused_ordering(517) 00:12:46.991 fused_ordering(518) 00:12:46.991 fused_ordering(519) 00:12:46.991 fused_ordering(520) 00:12:46.991 fused_ordering(521) 00:12:46.991 fused_ordering(522) 00:12:46.991 fused_ordering(523) 00:12:46.991 fused_ordering(524) 00:12:46.991 fused_ordering(525) 00:12:46.991 fused_ordering(526) 00:12:46.991 fused_ordering(527) 00:12:46.991 fused_ordering(528) 00:12:46.991 fused_ordering(529) 00:12:46.991 fused_ordering(530) 00:12:46.991 fused_ordering(531) 00:12:46.991 fused_ordering(532) 00:12:46.991 fused_ordering(533) 00:12:46.991 fused_ordering(534) 00:12:46.991 fused_ordering(535) 00:12:46.991 fused_ordering(536) 00:12:46.991 fused_ordering(537) 00:12:46.991 fused_ordering(538) 00:12:46.991 fused_ordering(539) 00:12:46.991 fused_ordering(540) 00:12:46.991 fused_ordering(541) 00:12:46.991 fused_ordering(542) 00:12:46.991 fused_ordering(543) 00:12:46.991 fused_ordering(544) 00:12:46.991 fused_ordering(545) 00:12:46.991 fused_ordering(546) 00:12:46.991 fused_ordering(547) 00:12:46.991 fused_ordering(548) 00:12:46.991 fused_ordering(549) 00:12:46.991 fused_ordering(550) 00:12:46.991 fused_ordering(551) 00:12:46.991 fused_ordering(552) 00:12:46.991 fused_ordering(553) 00:12:46.991 fused_ordering(554) 00:12:46.991 fused_ordering(555) 00:12:46.991 fused_ordering(556) 00:12:46.991 fused_ordering(557) 00:12:46.991 fused_ordering(558) 00:12:46.991 fused_ordering(559) 00:12:46.991 fused_ordering(560) 00:12:46.991 fused_ordering(561) 00:12:46.991 fused_ordering(562) 00:12:46.991 fused_ordering(563) 00:12:46.991 fused_ordering(564) 00:12:46.991 fused_ordering(565) 00:12:46.991 fused_ordering(566) 00:12:46.991 fused_ordering(567) 00:12:46.991 fused_ordering(568) 00:12:46.991 fused_ordering(569) 00:12:46.991 fused_ordering(570) 00:12:46.991 fused_ordering(571) 00:12:46.991 fused_ordering(572) 00:12:46.991 fused_ordering(573) 00:12:46.991 
fused_ordering(574) 00:12:46.991 fused_ordering(575) 00:12:46.991 fused_ordering(576) 00:12:46.991 fused_ordering(577) 00:12:46.991 fused_ordering(578) 00:12:46.991 fused_ordering(579) 00:12:46.991 fused_ordering(580) 00:12:46.991 fused_ordering(581) 00:12:46.991 fused_ordering(582) 00:12:46.991 fused_ordering(583) 00:12:46.991 fused_ordering(584) 00:12:46.991 fused_ordering(585) 00:12:46.991 fused_ordering(586) 00:12:46.991 fused_ordering(587) 00:12:46.991 fused_ordering(588) 00:12:46.991 fused_ordering(589) 00:12:46.991 fused_ordering(590) 00:12:46.991 fused_ordering(591) 00:12:46.991 fused_ordering(592) 00:12:46.991 fused_ordering(593) 00:12:46.991 fused_ordering(594) 00:12:46.991 fused_ordering(595) 00:12:46.991 fused_ordering(596) 00:12:46.991 fused_ordering(597) 00:12:46.991 fused_ordering(598) 00:12:46.991 fused_ordering(599) 00:12:46.991 fused_ordering(600) 00:12:46.991 fused_ordering(601) 00:12:46.991 fused_ordering(602) 00:12:46.991 fused_ordering(603) 00:12:46.991 fused_ordering(604) 00:12:46.991 fused_ordering(605) 00:12:46.991 fused_ordering(606) 00:12:46.991 fused_ordering(607) 00:12:46.991 fused_ordering(608) 00:12:46.991 fused_ordering(609) 00:12:46.991 fused_ordering(610) 00:12:46.991 fused_ordering(611) 00:12:46.991 fused_ordering(612) 00:12:46.991 fused_ordering(613) 00:12:46.991 fused_ordering(614) 00:12:46.991 fused_ordering(615) 00:12:47.560 fused_ordering(616) 00:12:47.560 fused_ordering(617) 00:12:47.560 fused_ordering(618) 00:12:47.560 fused_ordering(619) 00:12:47.560 fused_ordering(620) 00:12:47.560 fused_ordering(621) 00:12:47.560 fused_ordering(622) 00:12:47.560 fused_ordering(623) 00:12:47.560 fused_ordering(624) 00:12:47.560 fused_ordering(625) 00:12:47.560 fused_ordering(626) 00:12:47.560 fused_ordering(627) 00:12:47.560 fused_ordering(628) 00:12:47.560 fused_ordering(629) 00:12:47.560 fused_ordering(630) 00:12:47.560 fused_ordering(631) 00:12:47.560 fused_ordering(632) 00:12:47.560 fused_ordering(633) 00:12:47.560 fused_ordering(634) 
00:12:47.560 fused_ordering(635) 00:12:47.560 fused_ordering(636) 00:12:47.560 fused_ordering(637) 00:12:47.560 fused_ordering(638) 00:12:47.560 fused_ordering(639) 00:12:47.560 fused_ordering(640) 00:12:47.560 fused_ordering(641) 00:12:47.560 fused_ordering(642) 00:12:47.560 fused_ordering(643) 00:12:47.560 fused_ordering(644) 00:12:47.560 fused_ordering(645) 00:12:47.560 fused_ordering(646) 00:12:47.560 fused_ordering(647) 00:12:47.560 fused_ordering(648) 00:12:47.560 fused_ordering(649) 00:12:47.560 fused_ordering(650) 00:12:47.560 fused_ordering(651) 00:12:47.560 fused_ordering(652) 00:12:47.560 fused_ordering(653) 00:12:47.560 fused_ordering(654) 00:12:47.560 fused_ordering(655) 00:12:47.560 fused_ordering(656) 00:12:47.560 fused_ordering(657) 00:12:47.560 fused_ordering(658) 00:12:47.560 fused_ordering(659) 00:12:47.560 fused_ordering(660) 00:12:47.560 fused_ordering(661) 00:12:47.560 fused_ordering(662) 00:12:47.560 fused_ordering(663) 00:12:47.560 fused_ordering(664) 00:12:47.560 fused_ordering(665) 00:12:47.560 fused_ordering(666) 00:12:47.560 fused_ordering(667) 00:12:47.560 fused_ordering(668) 00:12:47.560 fused_ordering(669) 00:12:47.560 fused_ordering(670) 00:12:47.560 fused_ordering(671) 00:12:47.560 fused_ordering(672) 00:12:47.560 fused_ordering(673) 00:12:47.560 fused_ordering(674) 00:12:47.560 fused_ordering(675) 00:12:47.560 fused_ordering(676) 00:12:47.561 fused_ordering(677) 00:12:47.561 fused_ordering(678) 00:12:47.561 fused_ordering(679) 00:12:47.561 fused_ordering(680) 00:12:47.561 fused_ordering(681) 00:12:47.561 fused_ordering(682) 00:12:47.561 fused_ordering(683) 00:12:47.561 fused_ordering(684) 00:12:47.561 fused_ordering(685) 00:12:47.561 fused_ordering(686) 00:12:47.561 fused_ordering(687) 00:12:47.561 fused_ordering(688) 00:12:47.561 fused_ordering(689) 00:12:47.561 fused_ordering(690) 00:12:47.561 fused_ordering(691) 00:12:47.561 fused_ordering(692) 00:12:47.561 fused_ordering(693) 00:12:47.561 fused_ordering(694) 00:12:47.561 
fused_ordering(695) 00:12:47.561 fused_ordering(696) 00:12:47.561 fused_ordering(697) 00:12:47.561 fused_ordering(698) 00:12:47.561 fused_ordering(699) 00:12:47.561 fused_ordering(700) 00:12:47.561 fused_ordering(701) 00:12:47.561 fused_ordering(702) 00:12:47.561 fused_ordering(703) 00:12:47.561 fused_ordering(704) 00:12:47.561 fused_ordering(705) 00:12:47.561 fused_ordering(706) 00:12:47.561 fused_ordering(707) 00:12:47.561 fused_ordering(708) 00:12:47.561 fused_ordering(709) 00:12:47.561 fused_ordering(710) 00:12:47.561 fused_ordering(711) 00:12:47.561 fused_ordering(712) 00:12:47.561 fused_ordering(713) 00:12:47.561 fused_ordering(714) 00:12:47.561 fused_ordering(715) 00:12:47.561 fused_ordering(716) 00:12:47.561 fused_ordering(717) 00:12:47.561 fused_ordering(718) 00:12:47.561 fused_ordering(719) 00:12:47.561 fused_ordering(720) 00:12:47.561 fused_ordering(721) 00:12:47.561 fused_ordering(722) 00:12:47.561 fused_ordering(723) 00:12:47.561 fused_ordering(724) 00:12:47.561 fused_ordering(725) 00:12:47.561 fused_ordering(726) 00:12:47.561 fused_ordering(727) 00:12:47.561 fused_ordering(728) 00:12:47.561 fused_ordering(729) 00:12:47.561 fused_ordering(730) 00:12:47.561 fused_ordering(731) 00:12:47.561 fused_ordering(732) 00:12:47.561 fused_ordering(733) 00:12:47.561 fused_ordering(734) 00:12:47.561 fused_ordering(735) 00:12:47.561 fused_ordering(736) 00:12:47.561 fused_ordering(737) 00:12:47.561 fused_ordering(738) 00:12:47.561 fused_ordering(739) 00:12:47.561 fused_ordering(740) 00:12:47.561 fused_ordering(741) 00:12:47.561 fused_ordering(742) 00:12:47.561 fused_ordering(743) 00:12:47.561 fused_ordering(744) 00:12:47.561 fused_ordering(745) 00:12:47.561 fused_ordering(746) 00:12:47.561 fused_ordering(747) 00:12:47.561 fused_ordering(748) 00:12:47.561 fused_ordering(749) 00:12:47.561 fused_ordering(750) 00:12:47.561 fused_ordering(751) 00:12:47.561 fused_ordering(752) 00:12:47.561 fused_ordering(753) 00:12:47.561 fused_ordering(754) 00:12:47.561 fused_ordering(755) 
00:12:47.561 fused_ordering(756) 00:12:47.561 fused_ordering(757) 00:12:47.561 fused_ordering(758) 00:12:47.561 fused_ordering(759) 00:12:47.561 fused_ordering(760) 00:12:47.561 fused_ordering(761) 00:12:47.561 fused_ordering(762) 00:12:47.561 fused_ordering(763) 00:12:47.561 fused_ordering(764) 00:12:47.561 fused_ordering(765) 00:12:47.561 fused_ordering(766) 00:12:47.561 fused_ordering(767) 00:12:47.561 fused_ordering(768) 00:12:47.561 fused_ordering(769) 00:12:47.561 fused_ordering(770) 00:12:47.561 fused_ordering(771) 00:12:47.561 fused_ordering(772) 00:12:47.561 fused_ordering(773) 00:12:47.561 fused_ordering(774) 00:12:47.561 fused_ordering(775) 00:12:47.561 fused_ordering(776) 00:12:47.561 fused_ordering(777) 00:12:47.561 fused_ordering(778) 00:12:47.561 fused_ordering(779) 00:12:47.561 fused_ordering(780) 00:12:47.561 fused_ordering(781) 00:12:47.561 fused_ordering(782) 00:12:47.561 fused_ordering(783) 00:12:47.561 fused_ordering(784) 00:12:47.561 fused_ordering(785) 00:12:47.561 fused_ordering(786) 00:12:47.561 fused_ordering(787) 00:12:47.561 fused_ordering(788) 00:12:47.561 fused_ordering(789) 00:12:47.561 fused_ordering(790) 00:12:47.561 fused_ordering(791) 00:12:47.561 fused_ordering(792) 00:12:47.561 fused_ordering(793) 00:12:47.561 fused_ordering(794) 00:12:47.561 fused_ordering(795) 00:12:47.561 fused_ordering(796) 00:12:47.561 fused_ordering(797) 00:12:47.561 fused_ordering(798) 00:12:47.561 fused_ordering(799) 00:12:47.561 fused_ordering(800) 00:12:47.561 fused_ordering(801) 00:12:47.561 fused_ordering(802) 00:12:47.561 fused_ordering(803) 00:12:47.561 fused_ordering(804) 00:12:47.561 fused_ordering(805) 00:12:47.561 fused_ordering(806) 00:12:47.561 fused_ordering(807) 00:12:47.561 fused_ordering(808) 00:12:47.561 fused_ordering(809) 00:12:47.561 fused_ordering(810) 00:12:47.561 fused_ordering(811) 00:12:47.561 fused_ordering(812) 00:12:47.561 fused_ordering(813) 00:12:47.561 fused_ordering(814) 00:12:47.561 fused_ordering(815) 00:12:47.561 
fused_ordering(816) 00:12:47.561 [... fused_ordering counters 817 through 997 elided; identical repetitive output, timestamps 00:12:47.561–00:12:48.498 ...] 
00:12:48.498 fused_ordering(998) 00:12:48.498 fused_ordering(999) 00:12:48.498 fused_ordering(1000) 00:12:48.498 fused_ordering(1001) 00:12:48.498 fused_ordering(1002) 00:12:48.498 fused_ordering(1003) 00:12:48.498 fused_ordering(1004) 00:12:48.498 fused_ordering(1005) 00:12:48.498 fused_ordering(1006) 00:12:48.498 fused_ordering(1007) 00:12:48.498 fused_ordering(1008) 00:12:48.498 fused_ordering(1009) 00:12:48.498 fused_ordering(1010) 00:12:48.498 fused_ordering(1011) 00:12:48.498 fused_ordering(1012) 00:12:48.498 fused_ordering(1013) 00:12:48.498 fused_ordering(1014) 00:12:48.498 fused_ordering(1015) 00:12:48.498 fused_ordering(1016) 00:12:48.498 fused_ordering(1017) 00:12:48.498 fused_ordering(1018) 00:12:48.498 fused_ordering(1019) 00:12:48.498 fused_ordering(1020) 00:12:48.498 fused_ordering(1021) 00:12:48.498 fused_ordering(1022) 00:12:48.498 fused_ordering(1023) 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.498 rmmod nvme_tcp 00:12:48.498 rmmod nvme_fabrics 00:12:48.498 rmmod nvme_keyring 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2319041 ']' 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2319041 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2319041 ']' 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2319041 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2319041 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2319041' 00:12:48.498 killing process with pid 2319041 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2319041 00:12:48.498 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2319041 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == 
\t\c\p ]] 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.755 16:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.661 00:12:50.661 real 0m7.535s 00:12:50.661 user 0m5.136s 00:12:50.661 sys 0m3.131s 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:50.661 ************************************ 00:12:50.661 END TEST nvmf_fused_ordering 00:12:50.661 ************************************ 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:50.661 16:41:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.661 ************************************ 00:12:50.661 START TEST nvmf_ns_masking 00:12:50.661 ************************************ 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:50.661 * Looking for test storage... 00:12:50.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:50.661 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:50.920 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:50.920 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.920 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.921 16:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.921 --rc genhtml_branch_coverage=1 00:12:50.921 --rc genhtml_function_coverage=1 00:12:50.921 --rc genhtml_legend=1 00:12:50.921 --rc geninfo_all_blocks=1 00:12:50.921 --rc geninfo_unexecuted_blocks=1 00:12:50.921 00:12:50.921 ' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.921 --rc genhtml_branch_coverage=1 00:12:50.921 --rc genhtml_function_coverage=1 00:12:50.921 --rc genhtml_legend=1 00:12:50.921 --rc geninfo_all_blocks=1 00:12:50.921 --rc geninfo_unexecuted_blocks=1 00:12:50.921 00:12:50.921 ' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.921 --rc genhtml_branch_coverage=1 00:12:50.921 --rc genhtml_function_coverage=1 00:12:50.921 --rc genhtml_legend=1 00:12:50.921 --rc geninfo_all_blocks=1 00:12:50.921 --rc geninfo_unexecuted_blocks=1 00:12:50.921 00:12:50.921 ' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.921 --rc genhtml_branch_coverage=1 00:12:50.921 --rc 
genhtml_function_coverage=1 00:12:50.921 --rc genhtml_legend=1 00:12:50.921 --rc geninfo_all_blocks=1 00:12:50.921 --rc geninfo_unexecuted_blocks=1 00:12:50.921 00:12:50.921 ' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.921 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e2af5a83-13c2-4348-9884-24b9281dd1f4 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9524d39f-46cc-41c6-9581-185b24d5520a 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=eb312c60-f35b-4e5f-8946-fb2e9a8266ba 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g 
is_hw=no 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.922 16:41:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.829 16:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.829 16:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:52.829 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:52.829 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:12:52.829 Found net devices under 0000:09:00.0: cvl_0_0 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:52.829 Found net devices under 0000:09:00.1: cvl_0_1 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.829 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.830 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:12:53.088 00:12:53.088 --- 10.0.0.2 ping statistics --- 00:12:53.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.088 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:12:53.088 00:12:53.088 --- 10.0.0.1 ping statistics --- 00:12:53.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.088 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2321279 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2321279 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2321279 ']' 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.088 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.088 [2024-10-17 16:41:06.699317] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:12:53.088 [2024-10-17 16:41:06.699404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.088 [2024-10-17 16:41:06.763215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.347 [2024-10-17 16:41:06.825729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.347 [2024-10-17 16:41:06.825795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
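The `nvmf_tcp_init` sequence traced above can be condensed into a short script. This is a sketch reconstructed from the log's commands, not the actual `nvmf/common.sh` helper; it assumes root privileges, the two e810 net devices (`cvl_0_0`/`cvl_0_1`) seen in the log, and a built `nvmf_tgt` binary.

```shell
#!/usr/bin/env bash
# Hedged sketch of the network setup from the log above (not nvmf/common.sh
# itself). One NIC port is moved into a dedicated network namespace and acts
# as the NVMe/TCP target; the other port stays in the default namespace as
# the initiator, so traffic between 10.0.0.1 and 10.0.0.2 crosses the wire.
set -euo pipefail

TARGET_IF=cvl_0_0          # port 0000:09:00.0, target side
INITIATOR_IF=cvl_0_1       # port 0000:09:00.1, initiator side
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                          # target port into netns
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in through the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the SPDK target inside the namespace (flags from the log:
# instance id 0, tracepoint group mask 0xFFFF).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
```

Because `nvmf_tgt` runs inside `cvl_0_0_ns_spdk`, every subsequent RPC in the log is issued through `ip netns exec` (the `NVMF_TARGET_NS_CMD` prefix), while the `nvme connect` calls run in the default namespace.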
00:12:53.347 [2024-10-17 16:41:06.825811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.347 [2024-10-17 16:41:06.825824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.347 [2024-10-17 16:41:06.825836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.347 [2024-10-17 16:41:06.826495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.347 16:41:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:53.607 [2024-10-17 16:41:07.271492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.607 16:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:53.607 16:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:53.607 16:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:54.174 Malloc1 00:12:54.174 16:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:54.435 Malloc2 00:12:54.435 16:41:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.723 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:55.000 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.264 [2024-10-17 16:41:08.807897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.264 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:55.264 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eb312c60-f35b-4e5f-8946-fb2e9a8266ba -a 10.0.0.2 -s 4420 -i 4 00:12:55.523 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.523 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.523 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.523 16:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.523 16:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:57.430 16:41:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.430 [ 0]:0x1 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.430 
16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=811f2c3e28044ec6a65fce4d815d0377 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 811f2c3e28044ec6a65fce4d815d0377 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.430 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.998 [ 0]:0x1 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=811f2c3e28044ec6a65fce4d815d0377 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 811f2c3e28044ec6a65fce4d815d0377 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.998 [ 1]:0x2 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:57.998 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.256 16:41:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.515 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:58.773 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:58.774 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eb312c60-f35b-4e5f-8946-fb2e9a8266ba -a 10.0.0.2 -s 4420 -i 4 00:12:59.032 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:59.032 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.032 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.032 16:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:59.032 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:59.032 16:41:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.940 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.198 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.199 [ 0]:0x2 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.199 16:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.456 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:01.456 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.456 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.456 [ 0]:0x1 00:13:01.456 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.456 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=811f2c3e28044ec6a65fce4d815d0377 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 811f2c3e28044ec6a65fce4d815d0377 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.715 [ 1]:0x2 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.715 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.973 [ 0]:0x2 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.973 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:02.233 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:02.233 16:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eb312c60-f35b-4e5f-8946-fb2e9a8266ba -a 10.0.0.2 -s 4420 -i 4 00:13:02.492 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:02.492 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.492 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.492 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:02.492 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:02.492 16:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.028 [ 0]:0x1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.028 16:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=811f2c3e28044ec6a65fce4d815d0377 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 811f2c3e28044ec6a65fce4d815d0377 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.028 [ 1]:0x2 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.028 
16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.028 [ 0]:0x2 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.028 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.029 16:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:05.029 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:05.287 [2024-10-17 16:41:18.891133] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:05.287 request: 00:13:05.287 { 00:13:05.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.287 "nsid": 2, 00:13:05.287 "host": "nqn.2016-06.io.spdk:host1", 00:13:05.287 "method": "nvmf_ns_remove_host", 00:13:05.287 "req_id": 1 00:13:05.287 } 00:13:05.287 Got JSON-RPC error response 00:13:05.287 response: 00:13:05.287 { 00:13:05.287 "code": -32602, 00:13:05.287 "message": "Invalid parameters" 00:13:05.287 } 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:05.287 16:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:05.287 [ 0]:0x2 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:05.287 16:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48f1b31a1ee44f93b1aa47671d002830 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48f1b31a1ee44f93b1aa47671d002830 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2322903 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2322903 /var/tmp/host.sock 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2322903 ']' 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:05.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.545 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.545 [2024-10-17 16:41:19.106651] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:13:05.545 [2024-10-17 16:41:19.106736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322903 ] 00:13:05.545 [2024-10-17 16:41:19.169515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.545 [2024-10-17 16:41:19.234127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.113 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.113 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:06.113 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.371 16:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.629 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e2af5a83-13c2-4348-9884-24b9281dd1f4 00:13:06.629 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:06.629 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E2AF5A8313C24348988424B9281DD1F4 -i 00:13:06.886 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9524d39f-46cc-41c6-9581-185b24d5520a 00:13:06.887 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:13:06.887 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9524D39F46CC41C69581185B24D5520A -i 00:13:07.145 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:07.403 16:41:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:07.661 16:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:07.661 16:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:07.919 nvme0n1 00:13:07.919 16:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:07.920 16:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:08.486 nvme1n2 00:13:08.486 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:08.486 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:08.486 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:08.486 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:08.486 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:08.744 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:08.744 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:08.744 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:08.744 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:09.002 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e2af5a83-13c2-4348-9884-24b9281dd1f4 == \e\2\a\f\5\a\8\3\-\1\3\c\2\-\4\3\4\8\-\9\8\8\4\-\2\4\b\9\2\8\1\d\d\1\f\4 ]] 00:13:09.002 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:09.002 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:09.002 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9524d39f-46cc-41c6-9581-185b24d5520a == \9\5\2\4\d\3\9\f\-\4\6\c\c\-\4\1\c\6\-\9\5\8\1\-\1\8\5\b\2\4\d\5\5\2\0\a ]] 00:13:09.261 16:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2322903 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2322903 ']' 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2322903 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2322903 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2322903' 00:13:09.261 killing process with pid 2322903 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2322903 00:13:09.261 16:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2322903 00:13:09.830 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.090 rmmod nvme_tcp 00:13:10.090 rmmod nvme_fabrics 00:13:10.090 rmmod nvme_keyring 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2321279 ']' 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2321279 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2321279 ']' 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2321279 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2321279 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.090 16:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2321279' 00:13:10.090 killing process with pid 2321279 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2321279 00:13:10.090 16:41:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2321279 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.349 16:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:12.886 00:13:12.886 real 0m21.791s 00:13:12.886 user 0m29.067s 00:13:12.886 sys 
0m4.182s 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.886 ************************************ 00:13:12.886 END TEST nvmf_ns_masking 00:13:12.886 ************************************ 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.886 ************************************ 00:13:12.886 START TEST nvmf_nvme_cli 00:13:12.886 ************************************ 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.886 * Looking for test storage... 
00:13:12.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:12.886 16:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.886 --rc 
genhtml_branch_coverage=1 00:13:12.886 --rc genhtml_function_coverage=1 00:13:12.886 --rc genhtml_legend=1 00:13:12.886 --rc geninfo_all_blocks=1 00:13:12.886 --rc geninfo_unexecuted_blocks=1 00:13:12.886 00:13:12.886 ' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.886 --rc genhtml_branch_coverage=1 00:13:12.886 --rc genhtml_function_coverage=1 00:13:12.886 --rc genhtml_legend=1 00:13:12.886 --rc geninfo_all_blocks=1 00:13:12.886 --rc geninfo_unexecuted_blocks=1 00:13:12.886 00:13:12.886 ' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.886 --rc genhtml_branch_coverage=1 00:13:12.886 --rc genhtml_function_coverage=1 00:13:12.886 --rc genhtml_legend=1 00:13:12.886 --rc geninfo_all_blocks=1 00:13:12.886 --rc geninfo_unexecuted_blocks=1 00:13:12.886 00:13:12.886 ' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.886 --rc genhtml_branch_coverage=1 00:13:12.886 --rc genhtml_function_coverage=1 00:13:12.886 --rc genhtml_legend=1 00:13:12.886 --rc geninfo_all_blocks=1 00:13:12.886 --rc geninfo_unexecuted_blocks=1 00:13:12.886 00:13:12.886 ' 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.886 16:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.886 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.887 16:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.887 16:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
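The trace above records a real (non-fatal) script bug: `nvmf/common.sh` line 33 runs `'[' '' -eq 1 ']'`, passing an empty variable to a numeric test, which produces `[: : integer expression expected`. A defensive pattern that avoids this class of error is to default the value before comparing (the variable name below is illustrative; the log does not show which variable was empty):

```shell
# Reproduce the failure mode, then the guarded form.
maybe_flag=""                        # unset/empty, as in the failing run

# Guarded: ${var:-0} substitutes 0 when the variable is unset OR empty,
# so the numeric test always sees an integer.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```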
_remove_spdk_ns 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.887 16:41:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:14.796 16:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:14.796 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:14.796 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.796 16:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.796 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:14.796 Found net devices under 0000:09:00.0: cvl_0_0 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:14.797 Found net devices under 0000:09:00.1: cvl_0_1 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.797 16:41:28 
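The `Found net devices under 0000:09:00.x: cvl_0_y` lines come from mapping each PCI NIC address to its kernel netdev names: glob the device's `net/` directory in sysfs, then strip the path prefix, exactly as `nvmf/common.sh@409/@425` does above. A small testable sketch of that mapping (the function and its base-directory parameter are illustrative; on a real system the base is `/sys/bus/pci/devices`):

```shell
# pci_netdevs BASE PCI_ADDR -> prints the netdev names found under
# BASE/PCI_ADDR/net, mirroring pci_net_devs=(".../net/"*) followed by
# pci_net_devs=("${pci_net_devs[@]##*/}") in the trace.
pci_netdevs() {
    local base=$1 pci=$2
    local devs=("$base/$pci/net/"*)
    devs=("${devs[@]##*/}")     # keep only the interface names
    echo "${devs[*]}"
}

# Typical use on hardware:
#   pci_netdevs /sys/bus/pci/devices 0000:09:00.0   # -> cvl_0_0
```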
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.797 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:13:15.055 00:13:15.055 --- 10.0.0.2 ping statistics --- 00:13:15.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.055 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:15.055 00:13:15.055 --- 10.0.0.1 ping statistics --- 00:13:15.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.055 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.055 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:15.056 16:41:28 
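The `nvmf_tcp_init` phase above builds a two-endpoint topology on one host: one port of the NIC is moved into a private network namespace to play the target, while its sibling stays in the root namespace as the initiator, and both directions are verified with `ping`. A hedged reconstruction of that sequence, wrapped in a function so it is only defined here (it needs root and the `cvl_0_0`/`cvl_0_1` interfaces from the log to actually run):

```shell
# Reconstruction of the namespace topology from the trace. Interface
# names, IPs, and the namespace name mirror the log; run as root.
setup_netns_topology() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # Let NVMe/TCP traffic in on the initiator side (port 4420).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The sub-millisecond RTTs in the ping output (0.221 ms / 0.104 ms) confirm the two ports are cabled back-to-back or switched locally, so the NVMe/TCP traffic that follows never leaves the test host.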
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=2325413 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 2325413 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2325413 ']' 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.056 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.056 [2024-10-17 16:41:28.616585] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:13:15.056 [2024-10-17 16:41:28.616670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.056 [2024-10-17 16:41:28.692327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.316 [2024-10-17 16:41:28.762385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.316 [2024-10-17 16:41:28.762447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.316 [2024-10-17 16:41:28.762464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.316 [2024-10-17 16:41:28.762477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.316 [2024-10-17 16:41:28.762488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.316 [2024-10-17 16:41:28.764179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.316 [2024-10-17 16:41:28.764208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.316 [2024-10-17 16:41:28.764235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.316 [2024-10-17 16:41:28.764240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.316 [2024-10-17 16:41:28.916666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.316 Malloc0 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.316 Malloc1 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.316 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.316 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.316 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.316 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.316 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.575 [2024-10-17 16:41:29.022939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:15.575 00:13:15.575 Discovery Log Number of Records 2, Generation counter 2 00:13:15.575 =====Discovery Log Entry 0====== 00:13:15.575 trtype: tcp 00:13:15.575 adrfam: ipv4 00:13:15.575 subtype: current discovery subsystem 00:13:15.575 treq: not required 00:13:15.575 portid: 0 00:13:15.575 trsvcid: 4420 
00:13:15.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:15.575 traddr: 10.0.0.2 00:13:15.575 eflags: explicit discovery connections, duplicate discovery information 00:13:15.575 sectype: none 00:13:15.575 =====Discovery Log Entry 1====== 00:13:15.575 trtype: tcp 00:13:15.575 adrfam: ipv4 00:13:15.575 subtype: nvme subsystem 00:13:15.575 treq: not required 00:13:15.575 portid: 0 00:13:15.575 trsvcid: 4420 00:13:15.575 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:15.575 traddr: 10.0.0.2 00:13:15.575 eflags: none 00:13:15.575 sectype: none 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:15.575 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.144 16:41:29 
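The `rpc_cmd` calls above provision the target that `nvme discover` then reports: a TCP transport, two 64 MiB/512 B malloc bdevs, one subsystem carrying both as namespaces, and data plus discovery listeners on 10.0.0.2:4420 (hence the two discovery log entries). A hedged reconstruction using SPDK's standard `rpc.py` CLI in place of the harness's `rpc_cmd` wrapper (the script path is taken from the workspace layout in the log; defined here but not called, since it needs a running `nvmf_tgt`):

```shell
# Reconstruction of the target provisioning sequence from the trace.
provision_target() {
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
    "$rpc" bdev_malloc_create 64 512 -b Malloc1
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME \
        -d SPDK_Controller1 -i 291
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```

The serial `SPDKISFASTANDAWESOME` set here is what the `waitforserial` step below greps for in `lsblk` output after `nvme connect`.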
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:16.144 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:16.144 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.144 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:16.144 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:16.144 16:41:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:18.681 
16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:18.681 /dev/nvme0n2 ]] 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:18.681 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.682 16:41:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.682 rmmod nvme_tcp 00:13:18.682 rmmod nvme_fabrics 00:13:18.682 rmmod nvme_keyring 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 2325413 ']' 
00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 2325413 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2325413 ']' 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2325413 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2325413 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2325413' 00:13:18.682 killing process with pid 2325413 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2325413 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2325413 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.682 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.227 00:13:21.227 real 0m8.278s 00:13:21.227 user 0m15.025s 00:13:21.227 sys 0m2.236s 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.227 ************************************ 00:13:21.227 END TEST nvmf_nvme_cli 00:13:21.227 ************************************ 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.227 ************************************ 00:13:21.227 
START TEST nvmf_vfio_user 00:13:21.227 ************************************ 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:21.227 * Looking for test storage... 00:13:21.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.227 16:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:21.227 16:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.227 --rc genhtml_branch_coverage=1 00:13:21.227 --rc genhtml_function_coverage=1 00:13:21.227 --rc genhtml_legend=1 00:13:21.227 --rc geninfo_all_blocks=1 00:13:21.227 --rc geninfo_unexecuted_blocks=1 00:13:21.227 00:13:21.227 ' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.227 --rc genhtml_branch_coverage=1 00:13:21.227 --rc genhtml_function_coverage=1 00:13:21.227 --rc genhtml_legend=1 00:13:21.227 --rc geninfo_all_blocks=1 00:13:21.227 --rc geninfo_unexecuted_blocks=1 00:13:21.227 00:13:21.227 ' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.227 --rc genhtml_branch_coverage=1 00:13:21.227 --rc genhtml_function_coverage=1 00:13:21.227 --rc genhtml_legend=1 00:13:21.227 --rc geninfo_all_blocks=1 00:13:21.227 --rc geninfo_unexecuted_blocks=1 00:13:21.227 00:13:21.227 ' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:21.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.227 --rc genhtml_branch_coverage=1 00:13:21.227 --rc genhtml_function_coverage=1 00:13:21.227 --rc genhtml_legend=1 00:13:21.227 --rc geninfo_all_blocks=1 00:13:21.227 --rc geninfo_unexecuted_blocks=1 00:13:21.227 00:13:21.227 ' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.227 
16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.227 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:21.228 16:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2326336 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2326336' 00:13:21.228 Process pid: 2326336 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2326336 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' 
-z 2326336 ']' 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.228 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:21.228 [2024-10-17 16:41:34.686261] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:13:21.228 [2024-10-17 16:41:34.686355] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.228 [2024-10-17 16:41:34.749879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.228 [2024-10-17 16:41:34.814510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.228 [2024-10-17 16:41:34.814571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.228 [2024-10-17 16:41:34.814587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.228 [2024-10-17 16:41:34.814600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.228 [2024-10-17 16:41:34.814618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:21.228 [2024-10-17 16:41:34.816355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.228 [2024-10-17 16:41:34.816409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.228 [2024-10-17 16:41:34.816522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.228 [2024-10-17 16:41:34.816525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.487 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.487 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:13:21.487 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:22.423 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:22.681 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:22.681 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:22.681 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:22.681 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:22.681 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:22.940 Malloc1 00:13:22.940 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:23.198 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:23.456 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:23.716 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:23.716 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:23.976 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:24.234 Malloc2 00:13:24.234 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:24.492 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:24.750 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:25.010 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:25.010 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:25.010 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:25.010 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:25.010 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:25.010 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:25.010 [2024-10-17 16:41:38.535455] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:13:25.010 [2024-10-17 16:41:38.535498] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326766 ] 00:13:25.010 [2024-10-17 16:41:38.568385] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:25.010 [2024-10-17 16:41:38.577478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:25.010 [2024-10-17 16:41:38.577512] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb0df777000 00:13:25.010 [2024-10-17 16:41:38.578471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.579470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.580473] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.581481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.582484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.583489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.584499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.585499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.010 [2024-10-17 16:41:38.586508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:25.010 [2024-10-17 16:41:38.586528] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb0df76c000 00:13:25.010 [2024-10-17 16:41:38.587648] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:25.010 [2024-10-17 16:41:38.602671] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:25.010 [2024-10-17 16:41:38.602709] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:25.010 [2024-10-17 16:41:38.607627] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:25.010 
[2024-10-17 16:41:38.607686] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:25.010 [2024-10-17 16:41:38.607794] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:25.010 [2024-10-17 16:41:38.607831] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:25.010 [2024-10-17 16:41:38.607842] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:25.010 [2024-10-17 16:41:38.608618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:25.010 [2024-10-17 16:41:38.608645] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:25.010 [2024-10-17 16:41:38.608658] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:25.010 [2024-10-17 16:41:38.609621] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:25.010 [2024-10-17 16:41:38.609641] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:25.010 [2024-10-17 16:41:38.609653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:25.010 [2024-10-17 16:41:38.610625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:25.010 [2024-10-17 16:41:38.610643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:25.010 [2024-10-17 16:41:38.611631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:25.010 [2024-10-17 16:41:38.611651] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:25.010 [2024-10-17 16:41:38.611660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:25.010 [2024-10-17 16:41:38.611671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:25.010 [2024-10-17 16:41:38.611781] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:25.010 [2024-10-17 16:41:38.611789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:25.010 [2024-10-17 16:41:38.611798] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:25.010 [2024-10-17 16:41:38.612636] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:25.010 [2024-10-17 16:41:38.613640] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:25.010 [2024-10-17 16:41:38.614646] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:25.010 [2024-10-17 16:41:38.615636] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:25.010 [2024-10-17 16:41:38.615744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:25.010 [2024-10-17 16:41:38.616658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:25.010 [2024-10-17 16:41:38.616675] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:25.010 [2024-10-17 16:41:38.616684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.616708] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:25.011 [2024-10-17 16:41:38.616726] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.616763] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:25.011 [2024-10-17 16:41:38.616773] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.011 [2024-10-17 16:41:38.616780] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.011 [2024-10-17 16:41:38.616801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.616852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 
16:41:38.616871] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:25.011 [2024-10-17 16:41:38.616879] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:25.011 [2024-10-17 16:41:38.616886] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:25.011 [2024-10-17 16:41:38.616893] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:25.011 [2024-10-17 16:41:38.616901] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:25.011 [2024-10-17 16:41:38.616908] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:25.011 [2024-10-17 16:41:38.616916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.616935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.616954] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.616991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.011 [2024-10-17 16:41:38.617030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:25.011 [2024-10-17 16:41:38.617057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.011 [2024-10-17 16:41:38.617070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.011 [2024-10-17 16:41:38.617079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617135] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:25.011 [2024-10-17 16:41:38.617144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617301] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:25.011 [2024-10-17 16:41:38.617310] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:25.011 [2024-10-17 16:41:38.617315] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.011 [2024-10-17 16:41:38.617325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617378] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:25.011 [2024-10-17 16:41:38.617395] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617421] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:25.011 [2024-10-17 16:41:38.617429] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.011 [2024-10-17 16:41:38.617434] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.011 [2024-10-17 16:41:38.617443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617512] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:25.011 [2024-10-17 16:41:38.617520] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.011 [2024-10-17 16:41:38.617525] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.011 [2024-10-17 16:41:38.617534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617633] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:25.011 [2024-10-17 16:41:38.617640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:25.011 [2024-10-17 16:41:38.617648] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:25.011 [2024-10-17 16:41:38.617676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617707] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:25.011 [2024-10-17 16:41:38.617801] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:25.011 [2024-10-17 16:41:38.617810] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:25.011 [2024-10-17 16:41:38.617816] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:25.011 [2024-10-17 16:41:38.617822] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:25.011 [2024-10-17 16:41:38.617827] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:25.011 [2024-10-17 16:41:38.617836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:25.011 [2024-10-17 16:41:38.617848] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:25.011 [2024-10-17 
16:41:38.617855] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:25.011 [2024-10-17 16:41:38.617861] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.011 [2024-10-17 16:41:38.617869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617880] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:25.011 [2024-10-17 16:41:38.617890] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.011 [2024-10-17 16:41:38.617896] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.011 [2024-10-17 16:41:38.617905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.011 [2024-10-17 16:41:38.617916] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:25.011 [2024-10-17 16:41:38.617924] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:25.011 [2024-10-17 16:41:38.617929] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:25.012 [2024-10-17 16:41:38.617938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:25.012 [2024-10-17 16:41:38.617949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:25.012 [2024-10-17 16:41:38.617968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:13:25.012 [2024-10-17 16:41:38.618012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:25.012 [2024-10-17 16:41:38.618027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:25.012 ===================================================== 00:13:25.012 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:25.012 ===================================================== 00:13:25.012 Controller Capabilities/Features 00:13:25.012 ================================ 00:13:25.012 Vendor ID: 4e58 00:13:25.012 Subsystem Vendor ID: 4e58 00:13:25.012 Serial Number: SPDK1 00:13:25.012 Model Number: SPDK bdev Controller 00:13:25.012 Firmware Version: 25.01 00:13:25.012 Recommended Arb Burst: 6 00:13:25.012 IEEE OUI Identifier: 8d 6b 50 00:13:25.012 Multi-path I/O 00:13:25.012 May have multiple subsystem ports: Yes 00:13:25.012 May have multiple controllers: Yes 00:13:25.012 Associated with SR-IOV VF: No 00:13:25.012 Max Data Transfer Size: 131072 00:13:25.012 Max Number of Namespaces: 32 00:13:25.012 Max Number of I/O Queues: 127 00:13:25.012 NVMe Specification Version (VS): 1.3 00:13:25.012 NVMe Specification Version (Identify): 1.3 00:13:25.012 Maximum Queue Entries: 256 00:13:25.012 Contiguous Queues Required: Yes 00:13:25.012 Arbitration Mechanisms Supported 00:13:25.012 Weighted Round Robin: Not Supported 00:13:25.012 Vendor Specific: Not Supported 00:13:25.012 Reset Timeout: 15000 ms 00:13:25.012 Doorbell Stride: 4 bytes 00:13:25.012 NVM Subsystem Reset: Not Supported 00:13:25.012 Command Sets Supported 00:13:25.012 NVM Command Set: Supported 00:13:25.012 Boot Partition: Not Supported 00:13:25.012 Memory Page Size Minimum: 4096 bytes 00:13:25.012 Memory Page Size Maximum: 4096 bytes 00:13:25.012 Persistent Memory Region: Not Supported 00:13:25.012 Optional Asynchronous Events 
Supported 00:13:25.012 Namespace Attribute Notices: Supported 00:13:25.012 Firmware Activation Notices: Not Supported 00:13:25.012 ANA Change Notices: Not Supported 00:13:25.012 PLE Aggregate Log Change Notices: Not Supported 00:13:25.012 LBA Status Info Alert Notices: Not Supported 00:13:25.012 EGE Aggregate Log Change Notices: Not Supported 00:13:25.012 Normal NVM Subsystem Shutdown event: Not Supported 00:13:25.012 Zone Descriptor Change Notices: Not Supported 00:13:25.012 Discovery Log Change Notices: Not Supported 00:13:25.012 Controller Attributes 00:13:25.012 128-bit Host Identifier: Supported 00:13:25.012 Non-Operational Permissive Mode: Not Supported 00:13:25.012 NVM Sets: Not Supported 00:13:25.012 Read Recovery Levels: Not Supported 00:13:25.012 Endurance Groups: Not Supported 00:13:25.012 Predictable Latency Mode: Not Supported 00:13:25.012 Traffic Based Keep ALive: Not Supported 00:13:25.012 Namespace Granularity: Not Supported 00:13:25.012 SQ Associations: Not Supported 00:13:25.012 UUID List: Not Supported 00:13:25.012 Multi-Domain Subsystem: Not Supported 00:13:25.012 Fixed Capacity Management: Not Supported 00:13:25.012 Variable Capacity Management: Not Supported 00:13:25.012 Delete Endurance Group: Not Supported 00:13:25.012 Delete NVM Set: Not Supported 00:13:25.012 Extended LBA Formats Supported: Not Supported 00:13:25.012 Flexible Data Placement Supported: Not Supported 00:13:25.012 00:13:25.012 Controller Memory Buffer Support 00:13:25.012 ================================ 00:13:25.012 Supported: No 00:13:25.012 00:13:25.012 Persistent Memory Region Support 00:13:25.012 ================================ 00:13:25.012 Supported: No 00:13:25.012 00:13:25.012 Admin Command Set Attributes 00:13:25.012 ============================ 00:13:25.012 Security Send/Receive: Not Supported 00:13:25.012 Format NVM: Not Supported 00:13:25.012 Firmware Activate/Download: Not Supported 00:13:25.012 Namespace Management: Not Supported 00:13:25.012 Device Self-Test: 
Not Supported 00:13:25.012 Directives: Not Supported 00:13:25.012 NVMe-MI: Not Supported 00:13:25.012 Virtualization Management: Not Supported 00:13:25.012 Doorbell Buffer Config: Not Supported 00:13:25.012 Get LBA Status Capability: Not Supported 00:13:25.012 Command & Feature Lockdown Capability: Not Supported 00:13:25.012 Abort Command Limit: 4 00:13:25.012 Async Event Request Limit: 4 00:13:25.012 Number of Firmware Slots: N/A 00:13:25.012 Firmware Slot 1 Read-Only: N/A 00:13:25.012 Firmware Activation Without Reset: N/A 00:13:25.012 Multiple Update Detection Support: N/A 00:13:25.012 Firmware Update Granularity: No Information Provided 00:13:25.012 Per-Namespace SMART Log: No 00:13:25.012 Asymmetric Namespace Access Log Page: Not Supported 00:13:25.012 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:25.012 Command Effects Log Page: Supported 00:13:25.012 Get Log Page Extended Data: Supported 00:13:25.012 Telemetry Log Pages: Not Supported 00:13:25.012 Persistent Event Log Pages: Not Supported 00:13:25.012 Supported Log Pages Log Page: May Support 00:13:25.012 Commands Supported & Effects Log Page: Not Supported 00:13:25.012 Feature Identifiers & Effects Log Page:May Support 00:13:25.012 NVMe-MI Commands & Effects Log Page: May Support 00:13:25.012 Data Area 4 for Telemetry Log: Not Supported 00:13:25.012 Error Log Page Entries Supported: 128 00:13:25.012 Keep Alive: Supported 00:13:25.012 Keep Alive Granularity: 10000 ms 00:13:25.012 00:13:25.012 NVM Command Set Attributes 00:13:25.012 ========================== 00:13:25.012 Submission Queue Entry Size 00:13:25.012 Max: 64 00:13:25.012 Min: 64 00:13:25.012 Completion Queue Entry Size 00:13:25.012 Max: 16 00:13:25.012 Min: 16 00:13:25.012 Number of Namespaces: 32 00:13:25.012 Compare Command: Supported 00:13:25.012 Write Uncorrectable Command: Not Supported 00:13:25.012 Dataset Management Command: Supported 00:13:25.012 Write Zeroes Command: Supported 00:13:25.012 Set Features Save Field: Not Supported 
00:13:25.012 Reservations: Not Supported 00:13:25.012 Timestamp: Not Supported 00:13:25.012 Copy: Supported 00:13:25.012 Volatile Write Cache: Present 00:13:25.012 Atomic Write Unit (Normal): 1 00:13:25.012 Atomic Write Unit (PFail): 1 00:13:25.012 Atomic Compare & Write Unit: 1 00:13:25.012 Fused Compare & Write: Supported 00:13:25.012 Scatter-Gather List 00:13:25.012 SGL Command Set: Supported (Dword aligned) 00:13:25.012 SGL Keyed: Not Supported 00:13:25.012 SGL Bit Bucket Descriptor: Not Supported 00:13:25.012 SGL Metadata Pointer: Not Supported 00:13:25.012 Oversized SGL: Not Supported 00:13:25.012 SGL Metadata Address: Not Supported 00:13:25.012 SGL Offset: Not Supported 00:13:25.012 Transport SGL Data Block: Not Supported 00:13:25.012 Replay Protected Memory Block: Not Supported 00:13:25.012 00:13:25.012 Firmware Slot Information 00:13:25.012 ========================= 00:13:25.012 Active slot: 1 00:13:25.012 Slot 1 Firmware Revision: 25.01 00:13:25.012 00:13:25.012 00:13:25.012 Commands Supported and Effects 00:13:25.012 ============================== 00:13:25.012 Admin Commands 00:13:25.012 -------------- 00:13:25.012 Get Log Page (02h): Supported 00:13:25.012 Identify (06h): Supported 00:13:25.012 Abort (08h): Supported 00:13:25.012 Set Features (09h): Supported 00:13:25.012 Get Features (0Ah): Supported 00:13:25.012 Asynchronous Event Request (0Ch): Supported 00:13:25.012 Keep Alive (18h): Supported 00:13:25.012 I/O Commands 00:13:25.012 ------------ 00:13:25.012 Flush (00h): Supported LBA-Change 00:13:25.012 Write (01h): Supported LBA-Change 00:13:25.012 Read (02h): Supported 00:13:25.012 Compare (05h): Supported 00:13:25.012 Write Zeroes (08h): Supported LBA-Change 00:13:25.012 Dataset Management (09h): Supported LBA-Change 00:13:25.012 Copy (19h): Supported LBA-Change 00:13:25.012 00:13:25.012 Error Log 00:13:25.012 ========= 00:13:25.012 00:13:25.012 Arbitration 00:13:25.012 =========== 00:13:25.012 Arbitration Burst: 1 00:13:25.012 00:13:25.012 Power 
Management 00:13:25.012 ================ 00:13:25.012 Number of Power States: 1 00:13:25.012 Current Power State: Power State #0 00:13:25.012 Power State #0: 00:13:25.012 Max Power: 0.00 W 00:13:25.012 Non-Operational State: Operational 00:13:25.012 Entry Latency: Not Reported 00:13:25.012 Exit Latency: Not Reported 00:13:25.012 Relative Read Throughput: 0 00:13:25.012 Relative Read Latency: 0 00:13:25.012 Relative Write Throughput: 0 00:13:25.012 Relative Write Latency: 0 00:13:25.012 Idle Power: Not Reported 00:13:25.012 Active Power: Not Reported 00:13:25.012 Non-Operational Permissive Mode: Not Supported 00:13:25.012 00:13:25.012 Health Information 00:13:25.012 ================== 00:13:25.012 Critical Warnings: 00:13:25.012 Available Spare Space: OK 00:13:25.012 Temperature: OK 00:13:25.012 Device Reliability: OK 00:13:25.012 Read Only: No 00:13:25.012 Volatile Memory Backup: OK 00:13:25.012 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:25.012 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:25.012 Available Spare: 0% 00:13:25.012 [2024-10-17 16:41:38.618163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:25.013 [2024-10-17 16:41:38.618181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:25.013 [2024-10-17 16:41:38.618227] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:25.013 [2024-10-17 16:41:38.618247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.013 [2024-10-17 16:41:38.618258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.013 [2024-10-17 16:41:38.618268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.013 [2024-10-17 16:41:38.618293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.013 [2024-10-17 16:41:38.622016] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:25.013 [2024-10-17 16:41:38.622039] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:25.013 [2024-10-17 16:41:38.622685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:25.013 [2024-10-17 16:41:38.622771] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:25.013 [2024-10-17 16:41:38.622784] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:25.013 [2024-10-17 16:41:38.623696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:25.013 [2024-10-17 16:41:38.623720] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:25.013 [2024-10-17 16:41:38.623776] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:25.013 [2024-10-17 16:41:38.625735] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:25.013 Available Spare Threshold: 0% 00:13:25.013 Life Percentage Used: 0% 00:13:25.013 Data Units Read: 0 00:13:25.013 Data Units Written: 0 00:13:25.013 Host Read Commands: 0 00:13:25.013 Host Write Commands: 0 00:13:25.013 Controller Busy Time: 0 minutes
00:13:25.013 Power Cycles: 0 00:13:25.013 Power On Hours: 0 hours 00:13:25.013 Unsafe Shutdowns: 0 00:13:25.013 Unrecoverable Media Errors: 0 00:13:25.013 Lifetime Error Log Entries: 0 00:13:25.013 Warning Temperature Time: 0 minutes 00:13:25.013 Critical Temperature Time: 0 minutes 00:13:25.013 00:13:25.013 Number of Queues 00:13:25.013 ================ 00:13:25.013 Number of I/O Submission Queues: 127 00:13:25.013 Number of I/O Completion Queues: 127 00:13:25.013 00:13:25.013 Active Namespaces 00:13:25.013 ================= 00:13:25.013 Namespace ID:1 00:13:25.013 Error Recovery Timeout: Unlimited 00:13:25.013 Command Set Identifier: NVM (00h) 00:13:25.013 Deallocate: Supported 00:13:25.013 Deallocated/Unwritten Error: Not Supported 00:13:25.013 Deallocated Read Value: Unknown 00:13:25.013 Deallocate in Write Zeroes: Not Supported 00:13:25.013 Deallocated Guard Field: 0xFFFF 00:13:25.013 Flush: Supported 00:13:25.013 Reservation: Supported 00:13:25.013 Namespace Sharing Capabilities: Multiple Controllers 00:13:25.013 Size (in LBAs): 131072 (0GiB) 00:13:25.013 Capacity (in LBAs): 131072 (0GiB) 00:13:25.013 Utilization (in LBAs): 131072 (0GiB) 00:13:25.013 NGUID: C6FF40DF0C1C4C058FDF3D6FE0C70C14 00:13:25.013 UUID: c6ff40df-0c1c-4c05-8fdf-3d6fe0c70c14 00:13:25.013 Thin Provisioning: Not Supported 00:13:25.013 Per-NS Atomic Units: Yes 00:13:25.013 Atomic Boundary Size (Normal): 0 00:13:25.013 Atomic Boundary Size (PFail): 0 00:13:25.013 Atomic Boundary Offset: 0 00:13:25.013 Maximum Single Source Range Length: 65535 00:13:25.013 Maximum Copy Length: 65535 00:13:25.013 Maximum Source Range Count: 1 00:13:25.013 NGUID/EUI64 Never Reused: No 00:13:25.013 Namespace Write Protected: No 00:13:25.013 Number of LBA Formats: 1 00:13:25.013 Current LBA Format: LBA Format #00 00:13:25.013 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:25.013 00:13:25.013 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:25.271 [2024-10-17 16:41:38.858872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.549 Initializing NVMe Controllers 00:13:30.549 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.549 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:30.549 Initialization complete. Launching workers. 00:13:30.549 ======================================================== 00:13:30.549 Latency(us) 00:13:30.549 Device Information : IOPS MiB/s Average min max 00:13:30.549 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32717.79 127.80 3913.00 1172.01 7473.33 00:13:30.549 ======================================================== 00:13:30.549 Total : 32717.79 127.80 3913.00 1172.01 7473.33 00:13:30.549 00:13:30.549 [2024-10-17 16:41:43.881764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.549 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:30.549 [2024-10-17 16:41:44.125942] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.890 Initializing NVMe Controllers 00:13:35.890 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.890 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:35.890 
Initialization complete. Launching workers. 00:13:35.890 ======================================================== 00:13:35.890 Latency(us) 00:13:35.890 Device Information : IOPS MiB/s Average min max 00:13:35.890 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.28 62.72 7976.64 4973.24 10974.01 00:13:35.890 ======================================================== 00:13:35.890 Total : 16057.28 62.72 7976.64 4973.24 10974.01 00:13:35.890 00:13:35.890 [2024-10-17 16:41:49.168588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.890 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:35.890 [2024-10-17 16:41:49.376612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:41.165 [2024-10-17 16:41:54.445434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:41.165 Initializing NVMe Controllers 00:13:41.165 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:41.165 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:41.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:41.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:41.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:41.165 Initialization complete. Launching workers. 
00:13:41.165 Starting thread on core 2 00:13:41.165 Starting thread on core 3 00:13:41.165 Starting thread on core 1 00:13:41.165 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:41.165 [2024-10-17 16:41:54.751467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.461 [2024-10-17 16:41:57.814873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.461 Initializing NVMe Controllers 00:13:44.461 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.461 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.461 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:44.461 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:44.461 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:44.461 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:44.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:44.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:44.461 Initialization complete. Launching workers. 
00:13:44.461 Starting thread on core 1 with urgent priority queue 00:13:44.461 Starting thread on core 2 with urgent priority queue 00:13:44.461 Starting thread on core 3 with urgent priority queue 00:13:44.461 Starting thread on core 0 with urgent priority queue 00:13:44.461 SPDK bdev Controller (SPDK1 ) core 0: 5450.67 IO/s 18.35 secs/100000 ios 00:13:44.461 SPDK bdev Controller (SPDK1 ) core 1: 5895.67 IO/s 16.96 secs/100000 ios 00:13:44.461 SPDK bdev Controller (SPDK1 ) core 2: 4672.67 IO/s 21.40 secs/100000 ios 00:13:44.461 SPDK bdev Controller (SPDK1 ) core 3: 5284.67 IO/s 18.92 secs/100000 ios 00:13:44.461 ======================================================== 00:13:44.461 00:13:44.461 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:44.461 [2024-10-17 16:41:58.108777] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.461 Initializing NVMe Controllers 00:13:44.461 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.461 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.461 Namespace ID: 1 size: 0GB 00:13:44.461 Initialization complete. 00:13:44.461 INFO: using host memory buffer for IO 00:13:44.461 Hello world! 
00:13:44.461 [2024-10-17 16:41:58.149427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.720 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:44.979 [2024-10-17 16:41:58.444817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.916 Initializing NVMe Controllers 00:13:45.916 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.916 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.916 Initialization complete. Launching workers. 00:13:45.916 submit (in ns) avg, min, max = 7351.6, 3505.6, 4021894.4 00:13:45.916 complete (in ns) avg, min, max = 27766.8, 2060.0, 4022406.7 00:13:45.916 00:13:45.916 Submit histogram 00:13:45.916 ================ 00:13:45.916 Range in us Cumulative Count 00:13:45.916 3.484 - 3.508: 0.0077% ( 1) 00:13:45.916 3.508 - 3.532: 0.2989% ( 38) 00:13:45.916 3.532 - 3.556: 1.4715% ( 153) 00:13:45.916 3.556 - 3.579: 4.6980% ( 421) 00:13:45.916 3.579 - 3.603: 9.1508% ( 581) 00:13:45.916 3.603 - 3.627: 16.2094% ( 921) 00:13:45.916 3.627 - 3.650: 24.4865% ( 1080) 00:13:45.916 3.650 - 3.674: 32.8556% ( 1092) 00:13:45.916 3.674 - 3.698: 39.7226% ( 896) 00:13:45.916 3.698 - 3.721: 47.5628% ( 1023) 00:13:45.916 3.721 - 3.745: 52.7207% ( 673) 00:13:45.916 3.745 - 3.769: 58.0779% ( 699) 00:13:45.916 3.769 - 3.793: 62.0785% ( 522) 00:13:45.916 3.793 - 3.816: 65.4123% ( 435) 00:13:45.916 3.816 - 3.840: 68.6925% ( 428) 00:13:45.916 3.840 - 3.864: 72.4019% ( 484) 00:13:45.916 3.864 - 3.887: 76.3719% ( 518) 00:13:45.916 3.887 - 3.911: 80.1655% ( 495) 00:13:45.916 3.911 - 3.935: 83.4611% ( 430) 00:13:45.916 3.935 - 3.959: 85.7679% ( 301) 00:13:45.916 3.959 - 3.982: 87.7299% ( 256) 
00:13:45.916 3.982 - 4.006: 89.4160% ( 220) 00:13:45.916 4.006 - 4.030: 90.6039% ( 155) 00:13:45.916 4.030 - 4.053: 91.6462% ( 136) 00:13:45.916 4.053 - 4.077: 92.6272% ( 128) 00:13:45.916 4.077 - 4.101: 93.5699% ( 123) 00:13:45.916 4.101 - 4.124: 94.3286% ( 99) 00:13:45.916 4.124 - 4.148: 94.8498% ( 68) 00:13:45.916 4.148 - 4.172: 95.3786% ( 69) 00:13:45.916 4.172 - 4.196: 95.7388% ( 47) 00:13:45.916 4.196 - 4.219: 96.0454% ( 40) 00:13:45.916 4.219 - 4.243: 96.2676% ( 29) 00:13:45.916 4.243 - 4.267: 96.3673% ( 13) 00:13:45.916 4.267 - 4.290: 96.4899% ( 16) 00:13:45.916 4.290 - 4.314: 96.6125% ( 16) 00:13:45.916 4.314 - 4.338: 96.7121% ( 13) 00:13:45.916 4.338 - 4.361: 96.8501% ( 18) 00:13:45.916 4.361 - 4.385: 96.9421% ( 12) 00:13:45.916 4.385 - 4.409: 96.9727% ( 4) 00:13:45.916 4.409 - 4.433: 97.0494% ( 10) 00:13:45.916 4.433 - 4.456: 97.0877% ( 5) 00:13:45.916 4.456 - 4.480: 97.1413% ( 7) 00:13:45.916 4.480 - 4.504: 97.1796% ( 5) 00:13:45.916 4.504 - 4.527: 97.1950% ( 2) 00:13:45.916 4.527 - 4.551: 97.2256% ( 4) 00:13:45.916 4.551 - 4.575: 97.2486% ( 3) 00:13:45.916 4.575 - 4.599: 97.2716% ( 3) 00:13:45.916 4.646 - 4.670: 97.2869% ( 2) 00:13:45.916 4.670 - 4.693: 97.3023% ( 2) 00:13:45.916 4.693 - 4.717: 97.3176% ( 2) 00:13:45.916 4.717 - 4.741: 97.3406% ( 3) 00:13:45.916 4.741 - 4.764: 97.3712% ( 4) 00:13:45.916 4.764 - 4.788: 97.4172% ( 6) 00:13:45.916 4.788 - 4.812: 97.4555% ( 5) 00:13:45.916 4.812 - 4.836: 97.4862% ( 4) 00:13:45.916 4.836 - 4.859: 97.5322% ( 6) 00:13:45.916 4.859 - 4.883: 97.5935% ( 8) 00:13:45.916 4.883 - 4.907: 97.6165% ( 3) 00:13:45.916 4.907 - 4.930: 97.6778% ( 8) 00:13:45.916 4.930 - 4.954: 97.7315% ( 7) 00:13:45.916 4.954 - 4.978: 97.7698% ( 5) 00:13:45.916 4.978 - 5.001: 97.7774% ( 1) 00:13:45.916 5.001 - 5.025: 97.7928% ( 2) 00:13:45.916 5.025 - 5.049: 97.8387% ( 6) 00:13:45.916 5.049 - 5.073: 97.8847% ( 6) 00:13:45.917 5.073 - 5.096: 97.9077% ( 3) 00:13:45.917 5.096 - 5.120: 97.9460% ( 5) 00:13:45.917 5.120 - 5.144: 97.9997% ( 7) 
00:13:45.917 5.167 - 5.191: 98.0150% ( 2) 00:13:45.917 5.191 - 5.215: 98.0227% ( 1) 00:13:45.917 5.215 - 5.239: 98.0303% ( 1) 00:13:45.917 5.239 - 5.262: 98.0457% ( 2) 00:13:45.917 5.286 - 5.310: 98.0533% ( 1) 00:13:45.917 5.404 - 5.428: 98.0687% ( 2) 00:13:45.917 5.428 - 5.452: 98.0840% ( 2) 00:13:45.917 5.499 - 5.523: 98.0917% ( 1) 00:13:45.917 5.594 - 5.618: 98.0993% ( 1) 00:13:45.917 5.618 - 5.641: 98.1070% ( 1) 00:13:45.917 5.641 - 5.665: 98.1300% ( 3) 00:13:45.917 5.760 - 5.784: 98.1376% ( 1) 00:13:45.917 5.831 - 5.855: 98.1453% ( 1) 00:13:45.917 5.902 - 5.926: 98.1530% ( 1) 00:13:45.917 6.163 - 6.210: 98.1606% ( 1) 00:13:45.917 6.210 - 6.258: 98.1683% ( 1) 00:13:45.917 6.258 - 6.305: 98.1836% ( 2) 00:13:45.917 6.353 - 6.400: 98.1913% ( 1) 00:13:45.917 6.637 - 6.684: 98.1990% ( 1) 00:13:45.917 6.779 - 6.827: 98.2066% ( 1) 00:13:45.917 6.969 - 7.016: 98.2219% ( 2) 00:13:45.917 7.016 - 7.064: 98.2296% ( 1) 00:13:45.917 7.111 - 7.159: 98.2449% ( 2) 00:13:45.917 7.301 - 7.348: 98.2526% ( 1) 00:13:45.917 7.396 - 7.443: 98.2679% ( 2) 00:13:45.917 7.443 - 7.490: 98.2756% ( 1) 00:13:45.917 7.633 - 7.680: 98.2833% ( 1) 00:13:45.917 7.680 - 7.727: 98.2909% ( 1) 00:13:45.917 7.727 - 7.775: 98.2986% ( 1) 00:13:45.917 7.775 - 7.822: 98.3216% ( 3) 00:13:45.917 7.870 - 7.917: 98.3369% ( 2) 00:13:45.917 7.917 - 7.964: 98.3446% ( 1) 00:13:45.917 7.964 - 8.012: 98.3522% ( 1) 00:13:45.917 8.012 - 8.059: 98.3599% ( 1) 00:13:45.917 8.059 - 8.107: 98.3752% ( 2) 00:13:45.917 8.107 - 8.154: 98.3982% ( 3) 00:13:45.917 8.154 - 8.201: 98.4059% ( 1) 00:13:45.917 8.201 - 8.249: 98.4135% ( 1) 00:13:45.917 8.296 - 8.344: 98.4289% ( 2) 00:13:45.917 8.439 - 8.486: 98.4519% ( 3) 00:13:45.917 8.486 - 8.533: 98.4595% ( 1) 00:13:45.917 8.533 - 8.581: 98.4749% ( 2) 00:13:45.917 8.581 - 8.628: 98.4825% ( 1) 00:13:45.917 8.628 - 8.676: 98.4902% ( 1) 00:13:45.917 8.676 - 8.723: 98.5055% ( 2) 00:13:45.917 8.723 - 8.770: 98.5132% ( 1) 00:13:45.917 8.770 - 8.818: 98.5208% ( 1) 00:13:45.917 8.818 - 
8.865: 98.5362% ( 2) 00:13:45.917 8.865 - 8.913: 98.5438% ( 1) 00:13:45.917 9.007 - 9.055: 98.5515% ( 1) 00:13:45.917 9.102 - 9.150: 98.5592% ( 1) 00:13:45.917 9.292 - 9.339: 98.5668% ( 1) 00:13:45.917 9.339 - 9.387: 98.5745% ( 1) 00:13:45.917 9.387 - 9.434: 98.5822% ( 1) 00:13:45.917 9.576 - 9.624: 98.5898% ( 1) 00:13:45.917 9.624 - 9.671: 98.5975% ( 1) 00:13:45.917 9.719 - 9.766: 98.6052% ( 1) 00:13:45.917 9.766 - 9.813: 98.6128% ( 1) 00:13:45.917 9.813 - 9.861: 98.6205% ( 1) 00:13:45.917 9.908 - 9.956: 98.6281% ( 1) 00:13:45.917 10.003 - 10.050: 98.6358% ( 1) 00:13:45.917 10.050 - 10.098: 98.6435% ( 1) 00:13:45.917 10.098 - 10.145: 98.6511% ( 1) 00:13:45.917 10.145 - 10.193: 98.6665% ( 2) 00:13:45.917 10.382 - 10.430: 98.6741% ( 1) 00:13:45.917 10.477 - 10.524: 98.6818% ( 1) 00:13:45.917 10.619 - 10.667: 98.6895% ( 1) 00:13:45.917 10.761 - 10.809: 98.6971% ( 1) 00:13:45.917 11.283 - 11.330: 98.7048% ( 1) 00:13:45.917 11.473 - 11.520: 98.7124% ( 1) 00:13:45.917 11.615 - 11.662: 98.7201% ( 1) 00:13:45.917 11.710 - 11.757: 98.7278% ( 1) 00:13:45.917 11.757 - 11.804: 98.7508% ( 3) 00:13:45.917 11.899 - 11.947: 98.7584% ( 1) 00:13:45.917 12.136 - 12.231: 98.7661% ( 1) 00:13:45.917 12.231 - 12.326: 98.7738% ( 1) 00:13:45.917 12.421 - 12.516: 98.7814% ( 1) 00:13:45.917 12.516 - 12.610: 98.7891% ( 1) 00:13:45.917 12.705 - 12.800: 98.7968% ( 1) 00:13:45.917 12.800 - 12.895: 98.8044% ( 1) 00:13:45.917 13.179 - 13.274: 98.8121% ( 1) 00:13:45.917 13.464 - 13.559: 98.8197% ( 1) 00:13:45.917 13.559 - 13.653: 98.8274% ( 1) 00:13:45.917 13.653 - 13.748: 98.8351% ( 1) 00:13:45.917 13.938 - 14.033: 98.8427% ( 1) 00:13:45.917 14.033 - 14.127: 98.8504% ( 1) 00:13:45.917 14.222 - 14.317: 98.8581% ( 1) 00:13:45.917 14.317 - 14.412: 98.8657% ( 1) 00:13:45.917 14.412 - 14.507: 98.8734% ( 1) 00:13:45.917 14.696 - 14.791: 98.8964% ( 3) 00:13:45.917 14.791 - 14.886: 98.9117% ( 2) 00:13:45.917 14.981 - 15.076: 98.9194% ( 1) 00:13:45.917 15.834 - 15.929: 98.9270% ( 1) 00:13:45.917 17.161 - 
17.256: 98.9500% ( 3) 00:13:45.917 17.256 - 17.351: 98.9577% ( 1) 00:13:45.917 17.351 - 17.446: 98.9654% ( 1) 00:13:45.917 17.446 - 17.541: 98.9960% ( 4) 00:13:45.917 17.541 - 17.636: 99.0190% ( 3) 00:13:45.917 17.636 - 17.730: 99.0650% ( 6) 00:13:45.917 17.730 - 17.825: 99.0880% ( 3) 00:13:45.917 17.825 - 17.920: 99.1110% ( 3) 00:13:45.917 17.920 - 18.015: 99.1646% ( 7) 00:13:45.917 18.015 - 18.110: 99.2336% ( 9) 00:13:45.917 18.110 - 18.204: 99.2566% ( 3) 00:13:45.917 18.204 - 18.299: 99.3026% ( 6) 00:13:45.917 18.299 - 18.394: 99.3562% ( 7) 00:13:45.917 18.394 - 18.489: 99.4329% ( 10) 00:13:45.917 18.489 - 18.584: 99.4865% ( 7) 00:13:45.917 18.584 - 18.679: 99.5018% ( 2) 00:13:45.917 18.679 - 18.773: 99.5708% ( 9) 00:13:45.917 18.773 - 18.868: 99.6091% ( 5) 00:13:45.917 18.868 - 18.963: 99.6321% ( 3) 00:13:45.917 18.963 - 19.058: 99.6704% ( 5) 00:13:45.917 19.058 - 19.153: 99.7088% ( 5) 00:13:45.917 19.153 - 19.247: 99.7471% ( 5) 00:13:45.917 19.247 - 19.342: 99.7701% ( 3) 00:13:45.917 19.437 - 19.532: 99.7854% ( 2) 00:13:45.917 19.627 - 19.721: 99.7931% ( 1) 00:13:45.917 19.816 - 19.911: 99.8007% ( 1) 00:13:45.917 19.911 - 20.006: 99.8161% ( 2) 00:13:45.917 20.006 - 20.101: 99.8237% ( 1) 00:13:45.917 20.575 - 20.670: 99.8314% ( 1) 00:13:45.917 21.239 - 21.333: 99.8391% ( 1) 00:13:45.917 23.040 - 23.135: 99.8467% ( 1) 00:13:45.917 24.083 - 24.178: 99.8544% ( 1) 00:13:45.917 24.652 - 24.841: 99.8620% ( 1) 00:13:45.917 25.410 - 25.600: 99.8697% ( 1) 00:13:45.917 25.979 - 26.169: 99.8774% ( 1) 00:13:45.917 27.307 - 27.496: 99.8850% ( 1) 00:13:45.917 27.686 - 27.876: 99.8927% ( 1) 00:13:45.917 28.255 - 28.444: 99.9004% ( 1) 00:13:45.917 29.013 - 29.203: 99.9080% ( 1) 00:13:45.917 36.030 - 36.219: 99.9157% ( 1) 00:13:45.917 3980.705 - 4004.978: 99.9770% ( 8) 00:13:45.917 4004.978 - 4029.250: 100.0000% ( 3) 00:13:45.917 00:13:45.917 Complete histogram 00:13:45.917 ================== 00:13:45.917 Range in us Cumulative Count 00:13:45.917 2.050 - 2.062: 0.0536% ( 7) 
00:13:45.917 2.062 - 2.074: 19.6582% ( 2558) 00:13:45.917 2.074 - 2.086: 51.7091% ( 4182) 00:13:45.917 2.086 - 2.098: 55.4874% ( 493) 00:13:45.917 2.098 - 2.110: 58.5530% ( 400) 00:13:45.917 2.110 - 2.121: 61.2048% ( 346) 00:13:45.917 2.121 - 2.133: 63.1131% ( 249) 00:13:45.917 2.133 - 2.145: 73.2679% ( 1325) 00:13:45.917 2.145 - 2.157: 79.7977% ( 852) 00:13:45.917 2.157 - 2.169: 80.7710% ( 127) 00:13:45.917 2.169 - 2.181: 81.9896% ( 159) 00:13:45.917 2.181 - 2.193: 82.9399% ( 124) 00:13:45.917 2.193 - 2.204: 83.6220% ( 89) 00:13:45.917 2.204 - 2.216: 87.2931% ( 479) 00:13:45.917 2.216 - 2.228: 89.8299% ( 331) 00:13:45.917 2.228 - 2.240: 91.7229% ( 247) 00:13:45.917 2.240 - 2.252: 93.0487% ( 173) 00:13:45.917 2.252 - 2.264: 93.5852% ( 70) 00:13:45.917 2.264 - 2.276: 93.8918% ( 40) 00:13:45.917 2.276 - 2.287: 94.1600% ( 35) 00:13:45.917 2.287 - 2.299: 94.5662% ( 53) 00:13:45.917 2.299 - 2.311: 95.1947% ( 82) 00:13:45.917 2.311 - 2.323: 95.6315% ( 57) 00:13:45.917 2.323 - 2.335: 95.7158% ( 11) 00:13:45.917 2.335 - 2.347: 95.7618% ( 6) 00:13:45.917 2.347 - 2.359: 95.8384% ( 10) 00:13:45.918 2.359 - 2.370: 96.0224% ( 24) 00:13:45.918 2.370 - 2.382: 96.3059% ( 37) 00:13:45.918 2.382 - 2.394: 96.7121% ( 53) 00:13:45.918 2.394 - 2.406: 97.0417% ( 43) 00:13:45.918 2.406 - 2.418: 97.2639% ( 29) 00:13:45.918 2.418 - 2.430: 97.4479% ( 24) 00:13:45.918 2.430 - 2.441: 97.6471% ( 26) 00:13:45.918 2.441 - 2.453: 97.7774% ( 17) 00:13:45.918 2.453 - 2.465: 97.9001% ( 16) 00:13:45.918 2.465 - 2.477: 98.0074% ( 14) 00:13:45.918 2.477 - 2.489: 98.0687% ( 8) 00:13:45.918 2.489 - 2.501: 98.1300% ( 8) 00:13:45.918 2.501 - 2.513: 98.1760% ( 6) 00:13:45.918 2.513 - 2.524: 98.2143% ( 5) 00:13:45.918 2.524 - 2.536: 98.2603% ( 6) 00:13:45.918 2.536 - 2.548: 98.3063% ( 6) 00:13:45.918 2.548 - 2.560: 98.3216% ( 2) 00:13:45.918 2.560 - 2.572: 98.3676% ( 6) 00:13:45.918 2.572 - 2.584: 98.3829% ( 2) 00:13:45.918 2.584 - 2.596: 98.3982% ( 2) 00:13:45.918 2.596 - 2.607: 98.4135% ( 2) 00:13:45.918 
2.607 - 2.619: 98.4212% ( 1) 00:13:45.918 [2024-10-17 16:41:59.465133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.918 2.631 - 2.643: 98.4442% ( 3) 00:13:45.918 2.643 - 2.655: 98.4519% ( 1) 00:13:45.918 2.702 - 2.714: 98.4595% ( 1) 00:13:45.918 2.726 - 2.738: 98.4749% ( 2) 00:13:45.918 2.750 - 2.761: 98.4825% ( 1) 00:13:45.918 2.773 - 2.785: 98.4902% ( 1) 00:13:45.918 3.200 - 3.224: 98.4979% ( 1) 00:13:45.918 3.247 - 3.271: 98.5055% ( 1) 00:13:45.918 3.271 - 3.295: 98.5132% ( 1) 00:13:45.918 3.366 - 3.390: 98.5208% ( 1) 00:13:45.918 3.390 - 3.413: 98.5438% ( 3) 00:13:45.918 3.413 - 3.437: 98.5592% ( 2) 00:13:45.918 3.437 - 3.461: 98.5668% ( 1) 00:13:45.918 3.508 - 3.532: 98.5745% ( 1) 00:13:45.918 3.532 - 3.556: 98.5822% ( 1) 00:13:45.918 3.603 - 3.627: 98.5898% ( 1) 00:13:45.918 3.627 - 3.650: 98.5975% ( 1) 00:13:45.918 3.721 - 3.745: 98.6052% ( 1) 00:13:45.918 3.816 - 3.840: 98.6281% ( 3) 00:13:45.918 3.840 - 3.864: 98.6435% ( 2) 00:13:45.918 4.267 - 4.290: 98.6511% ( 1) 00:13:45.918 5.547 - 5.570: 98.6588% ( 1) 00:13:45.918 5.713 - 5.736: 98.6665% ( 1) 00:13:45.918 5.926 - 5.950: 98.6741% ( 1) 00:13:45.918 6.305 - 6.353: 98.6895% ( 2) 00:13:45.918 6.400 - 6.447: 98.7048% ( 2) 00:13:45.918 6.542 - 6.590: 98.7124% ( 1) 00:13:45.918 6.874 - 6.921: 98.7354% ( 3) 00:13:45.918 6.969 - 7.016: 98.7431% ( 1) 00:13:45.918 7.443 - 7.490: 98.7508% ( 1) 00:13:45.918 7.538 - 7.585: 98.7584% ( 1) 00:13:45.918 7.585 - 7.633: 98.7661% ( 1) 00:13:45.918 7.775 - 7.822: 98.7738% ( 1) 00:13:45.918 7.822 - 7.870: 98.7814% ( 1) 00:13:45.918 11.757 - 11.804: 98.7891% ( 1) 00:13:45.918 15.360 - 15.455: 98.7968% ( 1) 00:13:45.918 15.455 - 15.550: 98.8044% ( 1) 00:13:45.918 15.550 - 15.644: 98.8121% ( 1) 00:13:45.918 15.644 - 15.739: 98.8351% ( 3) 00:13:45.918 15.739 - 15.834: 98.8427% ( 1) 00:13:45.918 15.834 - 15.929: 98.8734% ( 4) 00:13:45.918 15.929 - 16.024: 98.8964% ( 3) 00:13:45.918 16.024 - 16.119: 98.9424% ( 6)
00:13:45.918 16.119 - 16.213: 98.9807% ( 5) 00:13:45.918 16.213 - 16.308: 99.0267% ( 6) 00:13:45.918 16.308 - 16.403: 99.0727% ( 6) 00:13:45.918 16.498 - 16.593: 99.0880% ( 2) 00:13:45.918 16.593 - 16.687: 99.1263% ( 5) 00:13:45.918 16.687 - 16.782: 99.1416% ( 2) 00:13:45.918 16.782 - 16.877: 99.1953% ( 7) 00:13:45.918 16.877 - 16.972: 99.2106% ( 2) 00:13:45.918 16.972 - 17.067: 99.2259% ( 2) 00:13:45.918 17.067 - 17.161: 99.2336% ( 1) 00:13:45.918 17.161 - 17.256: 99.2489% ( 2) 00:13:45.918 17.256 - 17.351: 99.2566% ( 1) 00:13:45.918 17.351 - 17.446: 99.2643% ( 1) 00:13:45.918 17.446 - 17.541: 99.2872% ( 3) 00:13:45.918 17.636 - 17.730: 99.3026% ( 2) 00:13:45.918 18.015 - 18.110: 99.3102% ( 1) 00:13:45.918 18.110 - 18.204: 99.3256% ( 2) 00:13:45.918 18.204 - 18.299: 99.3332% ( 1) 00:13:45.918 18.299 - 18.394: 99.3409% ( 1) 00:13:45.918 18.489 - 18.584: 99.3486% ( 1) 00:13:45.918 97.090 - 97.849: 99.3562% ( 1) 00:13:45.918 1529.173 - 1535.241: 99.3639% ( 1) 00:13:45.918 3762.252 - 3786.524: 99.3716% ( 1) 00:13:45.918 3980.705 - 4004.978: 99.8774% ( 66) 00:13:45.918 4004.978 - 4029.250: 100.0000% ( 16) 00:13:45.918 00:13:45.918 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:45.918 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:45.918 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:45.918 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:45.918 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:46.177 [ 00:13:46.177 { 00:13:46.177 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:46.177 "subtype": "Discovery", 00:13:46.177 "listen_addresses": [], 00:13:46.177 "allow_any_host": true, 00:13:46.177 "hosts": [] 00:13:46.177 }, 00:13:46.177 { 00:13:46.177 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:46.177 "subtype": "NVMe", 00:13:46.177 "listen_addresses": [ 00:13:46.177 { 00:13:46.177 "trtype": "VFIOUSER", 00:13:46.177 "adrfam": "IPv4", 00:13:46.177 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:46.177 "trsvcid": "0" 00:13:46.177 } 00:13:46.177 ], 00:13:46.177 "allow_any_host": true, 00:13:46.177 "hosts": [], 00:13:46.177 "serial_number": "SPDK1", 00:13:46.177 "model_number": "SPDK bdev Controller", 00:13:46.177 "max_namespaces": 32, 00:13:46.177 "min_cntlid": 1, 00:13:46.177 "max_cntlid": 65519, 00:13:46.177 "namespaces": [ 00:13:46.177 { 00:13:46.177 "nsid": 1, 00:13:46.177 "bdev_name": "Malloc1", 00:13:46.177 "name": "Malloc1", 00:13:46.177 "nguid": "C6FF40DF0C1C4C058FDF3D6FE0C70C14", 00:13:46.177 "uuid": "c6ff40df-0c1c-4c05-8fdf-3d6fe0c70c14" 00:13:46.177 } 00:13:46.177 ] 00:13:46.177 }, 00:13:46.177 { 00:13:46.177 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:46.177 "subtype": "NVMe", 00:13:46.177 "listen_addresses": [ 00:13:46.177 { 00:13:46.177 "trtype": "VFIOUSER", 00:13:46.177 "adrfam": "IPv4", 00:13:46.177 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:46.177 "trsvcid": "0" 00:13:46.177 } 00:13:46.177 ], 00:13:46.177 "allow_any_host": true, 00:13:46.177 "hosts": [], 00:13:46.177 "serial_number": "SPDK2", 00:13:46.177 "model_number": "SPDK bdev Controller", 00:13:46.177 "max_namespaces": 32, 00:13:46.177 "min_cntlid": 1, 00:13:46.177 "max_cntlid": 65519, 00:13:46.177 "namespaces": [ 00:13:46.177 { 00:13:46.177 "nsid": 1, 00:13:46.177 "bdev_name": "Malloc2", 00:13:46.177 "name": "Malloc2", 00:13:46.177 "nguid": "92A061773EE742B18510F1A51F2FB0F4", 00:13:46.177 "uuid": "92a06177-3ee7-42b1-8510-f1a51f2fb0f4" 00:13:46.177 } 00:13:46.177 ] 00:13:46.177 } 00:13:46.177 ] 
00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2329285 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:46.177 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:46.436 [2024-10-17 16:41:59.989462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.695 Malloc3 00:13:46.695 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:46.954 [2024-10-17 16:42:00.432865] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:46.954 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:46.954 Asynchronous Event Request test 00:13:46.954 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.954 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.954 Registering asynchronous event callbacks... 00:13:46.954 Starting namespace attribute notice tests for all controllers... 00:13:46.954 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:46.954 aer_cb - Changed Namespace 00:13:46.954 Cleaning up... 
00:13:47.215 [ 00:13:47.215 { 00:13:47.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:47.215 "subtype": "Discovery", 00:13:47.215 "listen_addresses": [], 00:13:47.215 "allow_any_host": true, 00:13:47.215 "hosts": [] 00:13:47.215 }, 00:13:47.215 { 00:13:47.215 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:47.215 "subtype": "NVMe", 00:13:47.215 "listen_addresses": [ 00:13:47.215 { 00:13:47.215 "trtype": "VFIOUSER", 00:13:47.215 "adrfam": "IPv4", 00:13:47.215 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:47.215 "trsvcid": "0" 00:13:47.215 } 00:13:47.215 ], 00:13:47.215 "allow_any_host": true, 00:13:47.215 "hosts": [], 00:13:47.215 "serial_number": "SPDK1", 00:13:47.215 "model_number": "SPDK bdev Controller", 00:13:47.215 "max_namespaces": 32, 00:13:47.215 "min_cntlid": 1, 00:13:47.215 "max_cntlid": 65519, 00:13:47.215 "namespaces": [ 00:13:47.215 { 00:13:47.215 "nsid": 1, 00:13:47.215 "bdev_name": "Malloc1", 00:13:47.215 "name": "Malloc1", 00:13:47.215 "nguid": "C6FF40DF0C1C4C058FDF3D6FE0C70C14", 00:13:47.215 "uuid": "c6ff40df-0c1c-4c05-8fdf-3d6fe0c70c14" 00:13:47.215 }, 00:13:47.215 { 00:13:47.215 "nsid": 2, 00:13:47.215 "bdev_name": "Malloc3", 00:13:47.215 "name": "Malloc3", 00:13:47.215 "nguid": "9BB745019BB24725BE0063F08CD9CC37", 00:13:47.215 "uuid": "9bb74501-9bb2-4725-be00-63f08cd9cc37" 00:13:47.215 } 00:13:47.215 ] 00:13:47.215 }, 00:13:47.215 { 00:13:47.215 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:47.215 "subtype": "NVMe", 00:13:47.215 "listen_addresses": [ 00:13:47.215 { 00:13:47.215 "trtype": "VFIOUSER", 00:13:47.215 "adrfam": "IPv4", 00:13:47.215 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:47.215 "trsvcid": "0" 00:13:47.215 } 00:13:47.215 ], 00:13:47.215 "allow_any_host": true, 00:13:47.215 "hosts": [], 00:13:47.215 "serial_number": "SPDK2", 00:13:47.215 "model_number": "SPDK bdev Controller", 00:13:47.215 "max_namespaces": 32, 00:13:47.215 "min_cntlid": 1, 00:13:47.215 "max_cntlid": 65519, 00:13:47.215 "namespaces": [ 
00:13:47.215 { 00:13:47.215 "nsid": 1, 00:13:47.216 "bdev_name": "Malloc2", 00:13:47.216 "name": "Malloc2", 00:13:47.216 "nguid": "92A061773EE742B18510F1A51F2FB0F4", 00:13:47.216 "uuid": "92a06177-3ee7-42b1-8510-f1a51f2fb0f4" 00:13:47.216 } 00:13:47.216 ] 00:13:47.216 } 00:13:47.216 ] 00:13:47.216 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2329285 00:13:47.216 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:47.216 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:47.216 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:47.216 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:47.216 [2024-10-17 16:42:00.770484] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:13:47.216 [2024-10-17 16:42:00.770528] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329475 ] 00:13:47.216 [2024-10-17 16:42:00.803619] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:47.216 [2024-10-17 16:42:00.806959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:47.216 [2024-10-17 16:42:00.806995] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcb909d0000 00:13:47.216 [2024-10-17 16:42:00.807956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.808962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.809966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.810989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.811996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.813022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.817013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:47.216 
[2024-10-17 16:42:00.818020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:47.216 [2024-10-17 16:42:00.819027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:47.216 [2024-10-17 16:42:00.819061] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcb909c5000 00:13:47.216 [2024-10-17 16:42:00.820226] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:47.216 [2024-10-17 16:42:00.837234] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:47.216 [2024-10-17 16:42:00.837274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:47.216 [2024-10-17 16:42:00.839393] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:47.216 [2024-10-17 16:42:00.839450] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:47.216 [2024-10-17 16:42:00.839540] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:47.216 [2024-10-17 16:42:00.839568] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:47.216 [2024-10-17 16:42:00.839579] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:47.216 [2024-10-17 16:42:00.840398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:47.216 [2024-10-17 16:42:00.840420] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:47.216 [2024-10-17 16:42:00.840433] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:47.216 [2024-10-17 16:42:00.841421] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:47.216 [2024-10-17 16:42:00.841442] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:47.216 [2024-10-17 16:42:00.841456] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:47.216 [2024-10-17 16:42:00.842429] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:47.216 [2024-10-17 16:42:00.842453] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:47.216 [2024-10-17 16:42:00.843431] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:47.216 [2024-10-17 16:42:00.843453] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:47.216 [2024-10-17 16:42:00.843463] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:47.216 [2024-10-17 16:42:00.843489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:47.216 [2024-10-17 16:42:00.843599] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:47.216 [2024-10-17 16:42:00.843608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:47.216 [2024-10-17 16:42:00.843616] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:47.216 [2024-10-17 16:42:00.844441] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:47.216 [2024-10-17 16:42:00.845466] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:47.216 [2024-10-17 16:42:00.846455] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:47.216 [2024-10-17 16:42:00.847453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:47.216 [2024-10-17 16:42:00.847535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:47.216 [2024-10-17 16:42:00.848474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:47.216 [2024-10-17 16:42:00.848494] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:47.216 [2024-10-17 16:42:00.848504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.848529] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:47.216 [2024-10-17 16:42:00.848543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.848568] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:47.216 [2024-10-17 16:42:00.848579] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:47.216 [2024-10-17 16:42:00.848585] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.216 [2024-10-17 16:42:00.848605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:47.216 [2024-10-17 16:42:00.855017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:47.216 [2024-10-17 16:42:00.855041] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:47.216 [2024-10-17 16:42:00.855051] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:47.216 [2024-10-17 16:42:00.855063] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:47.216 [2024-10-17 16:42:00.855072] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:47.216 [2024-10-17 16:42:00.855082] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:47.216 [2024-10-17 16:42:00.855089] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:47.216 [2024-10-17 16:42:00.855097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.855115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.855136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:47.216 [2024-10-17 16:42:00.863013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:47.216 [2024-10-17 16:42:00.863039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.216 [2024-10-17 16:42:00.863052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.216 [2024-10-17 16:42:00.863064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.216 [2024-10-17 16:42:00.863075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.216 [2024-10-17 16:42:00.863084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.863100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.863115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:47.216 [2024-10-17 16:42:00.871016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:47.216 [2024-10-17 16:42:00.871035] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:47.216 [2024-10-17 16:42:00.871044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.871055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:47.216 [2024-10-17 16:42:00.871066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.871079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:47.217 [2024-10-17 16:42:00.879014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:47.217 [2024-10-17 16:42:00.879089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.879111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.879125] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:47.217 [2024-10-17 16:42:00.879138] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:47.217 [2024-10-17 16:42:00.879144] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.217 [2024-10-17 16:42:00.879154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:47.217 [2024-10-17 16:42:00.887012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:47.217 [2024-10-17 16:42:00.887050] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:47.217 [2024-10-17 16:42:00.887070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.887085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.887098] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:47.217 [2024-10-17 16:42:00.887106] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:47.217 [2024-10-17 16:42:00.887112] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.217 [2024-10-17 16:42:00.887122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:47.217 [2024-10-17 16:42:00.895015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:47.217 [2024-10-17 16:42:00.895048] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.895063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.895076] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:47.217 [2024-10-17 16:42:00.895084] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:47.217 [2024-10-17 16:42:00.895090] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.217 [2024-10-17 16:42:00.895111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:47.217 [2024-10-17 16:42:00.903027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:47.217 [2024-10-17 16:42:00.903055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903118] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903134] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:47.217 [2024-10-17 16:42:00.903146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:47.217 [2024-10-17 16:42:00.903154] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:47.217 [2024-10-17 16:42:00.903179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:47.478 [2024-10-17 16:42:00.911013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:47.478 [2024-10-17 16:42:00.911040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:47.478 [2024-10-17 16:42:00.919013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:47.478 [2024-10-17 16:42:00.919045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:47.478 [2024-10-17 16:42:00.927014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:47.478 [2024-10-17 16:42:00.927044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:47.478 [2024-10-17 16:42:00.935013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:47.478 [2024-10-17 16:42:00.935045] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:47.478 [2024-10-17 16:42:00.935056] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:47.478 [2024-10-17 16:42:00.935062] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:47.478 [2024-10-17 16:42:00.935068] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:47.478 [2024-10-17 16:42:00.935074] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:47.478 [2024-10-17 16:42:00.935084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:47.478 [2024-10-17 16:42:00.935096] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:47.478 [2024-10-17 16:42:00.935104] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:47.478 [2024-10-17 16:42:00.935110] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.478 [2024-10-17 16:42:00.935119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:47.478 [2024-10-17 16:42:00.935130] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:47.478 [2024-10-17 16:42:00.935138] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:47.479 
[2024-10-17 16:42:00.935143] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.479 [2024-10-17 16:42:00.935152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:47.479 [2024-10-17 16:42:00.935164] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:47.479 [2024-10-17 16:42:00.935172] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:47.479 [2024-10-17 16:42:00.935178] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:47.479 [2024-10-17 16:42:00.935187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:47.479 [2024-10-17 16:42:00.943011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:47.479 [2024-10-17 16:42:00.943039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:47.479 [2024-10-17 16:42:00.943057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:47.479 [2024-10-17 16:42:00.943069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:47.479 ===================================================== 00:13:47.479 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:47.479 ===================================================== 00:13:47.479 Controller Capabilities/Features 00:13:47.479 ================================ 00:13:47.479 Vendor ID: 4e58 00:13:47.479 Subsystem Vendor ID: 4e58 
00:13:47.479 Serial Number: SPDK2 00:13:47.479 Model Number: SPDK bdev Controller 00:13:47.479 Firmware Version: 25.01 00:13:47.479 Recommended Arb Burst: 6 00:13:47.479 IEEE OUI Identifier: 8d 6b 50 00:13:47.479 Multi-path I/O 00:13:47.479 May have multiple subsystem ports: Yes 00:13:47.479 May have multiple controllers: Yes 00:13:47.479 Associated with SR-IOV VF: No 00:13:47.479 Max Data Transfer Size: 131072 00:13:47.479 Max Number of Namespaces: 32 00:13:47.479 Max Number of I/O Queues: 127 00:13:47.479 NVMe Specification Version (VS): 1.3 00:13:47.479 NVMe Specification Version (Identify): 1.3 00:13:47.479 Maximum Queue Entries: 256 00:13:47.479 Contiguous Queues Required: Yes 00:13:47.479 Arbitration Mechanisms Supported 00:13:47.479 Weighted Round Robin: Not Supported 00:13:47.479 Vendor Specific: Not Supported 00:13:47.479 Reset Timeout: 15000 ms 00:13:47.479 Doorbell Stride: 4 bytes 00:13:47.479 NVM Subsystem Reset: Not Supported 00:13:47.479 Command Sets Supported 00:13:47.479 NVM Command Set: Supported 00:13:47.479 Boot Partition: Not Supported 00:13:47.479 Memory Page Size Minimum: 4096 bytes 00:13:47.479 Memory Page Size Maximum: 4096 bytes 00:13:47.479 Persistent Memory Region: Not Supported 00:13:47.479 Optional Asynchronous Events Supported 00:13:47.479 Namespace Attribute Notices: Supported 00:13:47.479 Firmware Activation Notices: Not Supported 00:13:47.479 ANA Change Notices: Not Supported 00:13:47.479 PLE Aggregate Log Change Notices: Not Supported 00:13:47.479 LBA Status Info Alert Notices: Not Supported 00:13:47.479 EGE Aggregate Log Change Notices: Not Supported 00:13:47.479 Normal NVM Subsystem Shutdown event: Not Supported 00:13:47.479 Zone Descriptor Change Notices: Not Supported 00:13:47.479 Discovery Log Change Notices: Not Supported 00:13:47.479 Controller Attributes 00:13:47.479 128-bit Host Identifier: Supported 00:13:47.479 Non-Operational Permissive Mode: Not Supported 00:13:47.479 NVM Sets: Not Supported 00:13:47.479 Read Recovery 
Levels: Not Supported 00:13:47.479 Endurance Groups: Not Supported 00:13:47.479 Predictable Latency Mode: Not Supported 00:13:47.479 Traffic Based Keep ALive: Not Supported 00:13:47.479 Namespace Granularity: Not Supported 00:13:47.479 SQ Associations: Not Supported 00:13:47.479 UUID List: Not Supported 00:13:47.479 Multi-Domain Subsystem: Not Supported 00:13:47.479 Fixed Capacity Management: Not Supported 00:13:47.479 Variable Capacity Management: Not Supported 00:13:47.479 Delete Endurance Group: Not Supported 00:13:47.479 Delete NVM Set: Not Supported 00:13:47.479 Extended LBA Formats Supported: Not Supported 00:13:47.479 Flexible Data Placement Supported: Not Supported 00:13:47.479 00:13:47.479 Controller Memory Buffer Support 00:13:47.479 ================================ 00:13:47.479 Supported: No 00:13:47.479 00:13:47.479 Persistent Memory Region Support 00:13:47.479 ================================ 00:13:47.479 Supported: No 00:13:47.479 00:13:47.479 Admin Command Set Attributes 00:13:47.479 ============================ 00:13:47.479 Security Send/Receive: Not Supported 00:13:47.479 Format NVM: Not Supported 00:13:47.479 Firmware Activate/Download: Not Supported 00:13:47.479 Namespace Management: Not Supported 00:13:47.479 Device Self-Test: Not Supported 00:13:47.479 Directives: Not Supported 00:13:47.479 NVMe-MI: Not Supported 00:13:47.479 Virtualization Management: Not Supported 00:13:47.479 Doorbell Buffer Config: Not Supported 00:13:47.479 Get LBA Status Capability: Not Supported 00:13:47.479 Command & Feature Lockdown Capability: Not Supported 00:13:47.479 Abort Command Limit: 4 00:13:47.479 Async Event Request Limit: 4 00:13:47.479 Number of Firmware Slots: N/A 00:13:47.479 Firmware Slot 1 Read-Only: N/A 00:13:47.479 Firmware Activation Without Reset: N/A 00:13:47.479 Multiple Update Detection Support: N/A 00:13:47.479 Firmware Update Granularity: No Information Provided 00:13:47.479 Per-Namespace SMART Log: No 00:13:47.479 Asymmetric Namespace Access 
Log Page: Not Supported 00:13:47.479 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:47.479 Command Effects Log Page: Supported 00:13:47.479 Get Log Page Extended Data: Supported 00:13:47.479 Telemetry Log Pages: Not Supported 00:13:47.479 Persistent Event Log Pages: Not Supported 00:13:47.479 Supported Log Pages Log Page: May Support 00:13:47.479 Commands Supported & Effects Log Page: Not Supported 00:13:47.479 Feature Identifiers & Effects Log Page:May Support 00:13:47.479 NVMe-MI Commands & Effects Log Page: May Support 00:13:47.479 Data Area 4 for Telemetry Log: Not Supported 00:13:47.479 Error Log Page Entries Supported: 128 00:13:47.479 Keep Alive: Supported 00:13:47.479 Keep Alive Granularity: 10000 ms 00:13:47.479 00:13:47.479 NVM Command Set Attributes 00:13:47.479 ========================== 00:13:47.479 Submission Queue Entry Size 00:13:47.479 Max: 64 00:13:47.479 Min: 64 00:13:47.479 Completion Queue Entry Size 00:13:47.479 Max: 16 00:13:47.479 Min: 16 00:13:47.479 Number of Namespaces: 32 00:13:47.479 Compare Command: Supported 00:13:47.479 Write Uncorrectable Command: Not Supported 00:13:47.479 Dataset Management Command: Supported 00:13:47.479 Write Zeroes Command: Supported 00:13:47.479 Set Features Save Field: Not Supported 00:13:47.479 Reservations: Not Supported 00:13:47.479 Timestamp: Not Supported 00:13:47.479 Copy: Supported 00:13:47.479 Volatile Write Cache: Present 00:13:47.479 Atomic Write Unit (Normal): 1 00:13:47.479 Atomic Write Unit (PFail): 1 00:13:47.479 Atomic Compare & Write Unit: 1 00:13:47.479 Fused Compare & Write: Supported 00:13:47.479 Scatter-Gather List 00:13:47.479 SGL Command Set: Supported (Dword aligned) 00:13:47.479 SGL Keyed: Not Supported 00:13:47.479 SGL Bit Bucket Descriptor: Not Supported 00:13:47.479 SGL Metadata Pointer: Not Supported 00:13:47.479 Oversized SGL: Not Supported 00:13:47.479 SGL Metadata Address: Not Supported 00:13:47.479 SGL Offset: Not Supported 00:13:47.479 Transport SGL Data Block: Not Supported 
00:13:47.479 Replay Protected Memory Block: Not Supported 00:13:47.479 00:13:47.479 Firmware Slot Information 00:13:47.479 ========================= 00:13:47.479 Active slot: 1 00:13:47.479 Slot 1 Firmware Revision: 25.01 00:13:47.479 00:13:47.479 00:13:47.479 Commands Supported and Effects 00:13:47.479 ============================== 00:13:47.479 Admin Commands 00:13:47.479 -------------- 00:13:47.479 Get Log Page (02h): Supported 00:13:47.479 Identify (06h): Supported 00:13:47.479 Abort (08h): Supported 00:13:47.479 Set Features (09h): Supported 00:13:47.479 Get Features (0Ah): Supported 00:13:47.479 Asynchronous Event Request (0Ch): Supported 00:13:47.479 Keep Alive (18h): Supported 00:13:47.479 I/O Commands 00:13:47.479 ------------ 00:13:47.479 Flush (00h): Supported LBA-Change 00:13:47.479 Write (01h): Supported LBA-Change 00:13:47.479 Read (02h): Supported 00:13:47.479 Compare (05h): Supported 00:13:47.479 Write Zeroes (08h): Supported LBA-Change 00:13:47.479 Dataset Management (09h): Supported LBA-Change 00:13:47.479 Copy (19h): Supported LBA-Change 00:13:47.479 00:13:47.479 Error Log 00:13:47.479 ========= 00:13:47.479 00:13:47.479 Arbitration 00:13:47.479 =========== 00:13:47.479 Arbitration Burst: 1 00:13:47.479 00:13:47.479 Power Management 00:13:47.479 ================ 00:13:47.479 Number of Power States: 1 00:13:47.479 Current Power State: Power State #0 00:13:47.479 Power State #0: 00:13:47.479 Max Power: 0.00 W 00:13:47.479 Non-Operational State: Operational 00:13:47.479 Entry Latency: Not Reported 00:13:47.479 Exit Latency: Not Reported 00:13:47.480 Relative Read Throughput: 0 00:13:47.480 Relative Read Latency: 0 00:13:47.480 Relative Write Throughput: 0 00:13:47.480 Relative Write Latency: 0 00:13:47.480 Idle Power: Not Reported 00:13:47.480 Active Power: Not Reported 00:13:47.480 Non-Operational Permissive Mode: Not Supported 00:13:47.480 00:13:47.480 Health Information 00:13:47.480 ================== 00:13:47.480 Critical Warnings: 00:13:47.480 
Available Spare Space: OK 00:13:47.480 Temperature: OK 00:13:47.480 Device Reliability: OK 00:13:47.480 Read Only: No 00:13:47.480 Volatile Memory Backup: OK 00:13:47.480 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:47.480 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:47.480 Available Spare: 0% 00:13:47.480 Available Spare Threshold: 0% 00:13:47.480 Life Percentage Used: 0% 00:13:47.480 Data Units Read: 0 00:13:47.480 Data Units Written: 0 00:13:47.480 Host Read Commands: 0 00:13:47.480 Host Write Commands: 0 00:13:47.480 Controller Busy Time: 0 minutes 00:13:47.480 Power Cycles: 0 00:13:47.480 Power On Hours: 0 hours 00:13:47.480 Unsafe Shutdowns: 0 00:13:47.480 Unrecoverable Media Errors: 0 00:13:47.480 Lifetime Error Log Entries: 0 00:13:47.480 Warning Temperature Time: 0 minutes 00:13:47.480 Critical Temperature Time: 0 minutes 00:13:47.480 00:13:47.480 Number of Queues 00:13:47.480 ================ 00:13:47.480 Number of I/O Submission Queues: 127 00:13:47.480 Number of I/O Completion Queues: 127 00:13:47.480 00:13:47.480 Active Namespaces 00:13:47.480 ================= 00:13:47.480 Namespace ID:1 00:13:47.480 Error Recovery Timeout: Unlimited 00:13:47.480 Command Set Identifier: NVM (00h) 00:13:47.480 Deallocate: Supported 00:13:47.480 Deallocated/Unwritten Error: Not Supported 
[2024-10-17 16:42:00.943187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:47.480 [2024-10-17 16:42:00.951013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:47.480 [2024-10-17 16:42:00.951064] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:47.480 [2024-10-17 16:42:00.951082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.480 [2024-10-17 16:42:00.951093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.480 [2024-10-17 16:42:00.951103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.480 [2024-10-17 16:42:00.951113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.480 [2024-10-17 16:42:00.951191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:47.480 [2024-10-17 16:42:00.951212] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:47.480 [2024-10-17 16:42:00.952190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:13:47.480 [2024-10-17 16:42:00.952280] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:47.480 [2024-10-17 16:42:00.952311] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:47.480 [2024-10-17 16:42:00.953205] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:47.480 [2024-10-17 16:42:00.953230] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:47.480 [2024-10-17 16:42:00.953301] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:47.480 [2024-10-17 16:42:00.956027] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:47.480 
00:13:47.480 Deallocated Read Value: Unknown 00:13:47.480 Deallocate in Write Zeroes: Not Supported 00:13:47.480 Deallocated Guard Field: 0xFFFF 00:13:47.480 Flush: Supported 00:13:47.480 Reservation: Supported 00:13:47.480 Namespace Sharing Capabilities: Multiple Controllers 00:13:47.480 Size (in LBAs): 131072 (0GiB) 00:13:47.480 Capacity (in LBAs): 131072 (0GiB) 00:13:47.480 Utilization (in LBAs): 131072 (0GiB) 00:13:47.480 NGUID: 92A061773EE742B18510F1A51F2FB0F4 00:13:47.480 UUID: 92a06177-3ee7-42b1-8510-f1a51f2fb0f4 00:13:47.480 Thin Provisioning: Not Supported 00:13:47.480 Per-NS Atomic Units: Yes 00:13:47.480 Atomic Boundary Size (Normal): 0 00:13:47.480 Atomic Boundary Size (PFail): 0 00:13:47.480 Atomic Boundary Offset: 0 00:13:47.480 Maximum Single Source Range Length: 65535 00:13:47.480 Maximum Copy Length: 65535 00:13:47.480 Maximum Source Range Count: 1 00:13:47.480 NGUID/EUI64 Never Reused: No 00:13:47.480 Namespace Write Protected: No 00:13:47.480 Number of LBA Formats: 1 00:13:47.480 Current LBA Format: LBA Format #00 00:13:47.480 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:47.480 00:13:47.480 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:47.739 [2024-10-17 16:42:01.185041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:53.014 Initializing NVMe Controllers 00:13:53.014 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:53.014 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:53.014 Initialization complete. Launching workers. 
00:13:53.014 ======================================================== 00:13:53.014 Latency(us) 00:13:53.014 Device Information : IOPS MiB/s Average min max 00:13:53.014 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33003.92 128.92 3877.68 1189.03 7430.05 00:13:53.014 ======================================================== 00:13:53.014 Total : 33003.92 128.92 3877.68 1189.03 7430.05 00:13:53.014 00:13:53.014 [2024-10-17 16:42:06.292347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:53.014 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:53.014 [2024-10-17 16:42:06.527061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:58.286 Initializing NVMe Controllers 00:13:58.286 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:58.286 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:58.286 Initialization complete. Launching workers. 
00:13:58.286 ======================================================== 00:13:58.286 Latency(us) 00:13:58.286 Device Information : IOPS MiB/s Average min max 00:13:58.286 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31024.18 121.19 4125.52 1217.26 10325.37 00:13:58.286 ======================================================== 00:13:58.286 Total : 31024.18 121.19 4125.52 1217.26 10325.37 00:13:58.287 00:13:58.287 [2024-10-17 16:42:11.548433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:58.287 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:58.287 [2024-10-17 16:42:11.760311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:03.564 [2024-10-17 16:42:16.887152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:03.564 Initializing NVMe Controllers 00:14:03.564 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:03.564 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:03.564 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:03.564 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:03.564 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:03.564 Initialization complete. Launching workers. 
00:14:03.564 Starting thread on core 2 00:14:03.564 Starting thread on core 3 00:14:03.564 Starting thread on core 1 00:14:03.564 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:03.564 [2024-10-17 16:42:17.205492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.762 [2024-10-17 16:42:20.855336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.762 Initializing NVMe Controllers 00:14:07.762 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.762 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.762 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:07.762 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:07.762 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:07.762 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:07.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:07.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:07.762 Initialization complete. Launching workers. 
00:14:07.762 Starting thread on core 1 with urgent priority queue 00:14:07.762 Starting thread on core 2 with urgent priority queue 00:14:07.762 Starting thread on core 3 with urgent priority queue 00:14:07.762 Starting thread on core 0 with urgent priority queue 00:14:07.762 SPDK bdev Controller (SPDK2 ) core 0: 3491.67 IO/s 28.64 secs/100000 ios 00:14:07.762 SPDK bdev Controller (SPDK2 ) core 1: 3534.33 IO/s 28.29 secs/100000 ios 00:14:07.762 SPDK bdev Controller (SPDK2 ) core 2: 3601.00 IO/s 27.77 secs/100000 ios 00:14:07.762 SPDK bdev Controller (SPDK2 ) core 3: 3287.67 IO/s 30.42 secs/100000 ios 00:14:07.762 ======================================================== 00:14:07.762 00:14:07.762 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:07.762 [2024-10-17 16:42:21.163526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.762 Initializing NVMe Controllers 00:14:07.762 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.762 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.762 Namespace ID: 1 size: 0GB 00:14:07.762 Initialization complete. 00:14:07.762 INFO: using host memory buffer for IO 00:14:07.762 Hello world! 
00:14:07.762 [2024-10-17 16:42:21.171594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.762 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:08.020 [2024-10-17 16:42:21.456483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.954 Initializing NVMe Controllers 00:14:08.954 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.954 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.954 Initialization complete. Launching workers. 00:14:08.954 submit (in ns) avg, min, max = 5816.7, 3490.0, 4002414.4 00:14:08.954 complete (in ns) avg, min, max = 30400.7, 2066.7, 8008213.3 00:14:08.954 00:14:08.954 Submit histogram 00:14:08.954 ================ 00:14:08.954 Range in us Cumulative Count 00:14:08.954 3.484 - 3.508: 0.0157% ( 2) 00:14:08.954 3.508 - 3.532: 0.6730% ( 84) 00:14:08.954 3.532 - 3.556: 3.0365% ( 302) 00:14:08.954 3.556 - 3.579: 7.5364% ( 575) 00:14:08.954 3.579 - 3.603: 15.0650% ( 962) 00:14:08.954 3.603 - 3.627: 23.2118% ( 1041) 00:14:08.954 3.627 - 3.650: 32.3212% ( 1164) 00:14:08.954 3.650 - 3.674: 41.0315% ( 1113) 00:14:08.954 3.674 - 3.698: 47.1592% ( 783) 00:14:08.954 3.698 - 3.721: 53.8425% ( 854) 00:14:08.954 3.721 - 3.745: 57.9746% ( 528) 00:14:08.954 3.745 - 3.769: 62.1537% ( 534) 00:14:08.954 3.769 - 3.793: 65.2293% ( 393) 00:14:08.954 3.793 - 3.816: 68.6258% ( 434) 00:14:08.954 3.816 - 3.840: 72.1396% ( 449) 00:14:08.954 3.840 - 3.864: 76.0213% ( 496) 00:14:08.954 3.864 - 3.887: 79.9343% ( 500) 00:14:08.954 3.887 - 3.911: 83.3229% ( 433) 00:14:08.954 3.911 - 3.935: 86.3437% ( 386) 00:14:08.954 3.935 - 3.959: 88.2767% ( 247) 00:14:08.954 3.959 - 3.982: 89.7010% ( 182) 
00:14:08.954 3.982 - 4.006: 90.8358% ( 145) 00:14:08.954 4.006 - 4.030: 91.8688% ( 132) 00:14:08.954 4.030 - 4.053: 92.6984% ( 106) 00:14:08.954 4.053 - 4.077: 93.5436% ( 108) 00:14:08.954 4.077 - 4.101: 94.2792% ( 94) 00:14:08.954 4.101 - 4.124: 94.9757% ( 89) 00:14:08.954 4.124 - 4.148: 95.4844% ( 65) 00:14:08.954 4.148 - 4.172: 95.8366% ( 45) 00:14:08.954 4.172 - 4.196: 96.1575% ( 41) 00:14:08.954 4.196 - 4.219: 96.4627% ( 39) 00:14:08.954 4.219 - 4.243: 96.6896% ( 29) 00:14:08.954 4.243 - 4.267: 96.8461% ( 20) 00:14:08.954 4.267 - 4.290: 96.9714% ( 16) 00:14:08.955 4.290 - 4.314: 97.0653% ( 12) 00:14:08.955 4.314 - 4.338: 97.1592% ( 12) 00:14:08.955 4.338 - 4.361: 97.2374% ( 10) 00:14:08.955 4.361 - 4.385: 97.2922% ( 7) 00:14:08.955 4.385 - 4.409: 97.3470% ( 7) 00:14:08.955 4.409 - 4.433: 97.3627% ( 2) 00:14:08.955 4.433 - 4.456: 97.4018% ( 5) 00:14:08.955 4.456 - 4.480: 97.4253% ( 3) 00:14:08.955 4.480 - 4.504: 97.4487% ( 3) 00:14:08.955 4.504 - 4.527: 97.4644% ( 2) 00:14:08.955 4.527 - 4.551: 97.4722% ( 1) 00:14:08.955 4.551 - 4.575: 97.4879% ( 2) 00:14:08.955 4.575 - 4.599: 97.4957% ( 1) 00:14:08.955 4.646 - 4.670: 97.5035% ( 1) 00:14:08.955 4.788 - 4.812: 97.5113% ( 1) 00:14:08.955 4.812 - 4.836: 97.5427% ( 4) 00:14:08.955 4.836 - 4.859: 97.5661% ( 3) 00:14:08.955 4.859 - 4.883: 97.5896% ( 3) 00:14:08.955 4.883 - 4.907: 97.6366% ( 6) 00:14:08.955 4.907 - 4.930: 97.7226% ( 11) 00:14:08.955 4.930 - 4.954: 97.7696% ( 6) 00:14:08.955 4.954 - 4.978: 97.8166% ( 6) 00:14:08.955 4.978 - 5.001: 97.8635% ( 6) 00:14:08.955 5.001 - 5.025: 97.8792% ( 2) 00:14:08.955 5.025 - 5.049: 97.9026% ( 3) 00:14:08.955 5.049 - 5.073: 97.9809% ( 10) 00:14:08.955 5.073 - 5.096: 98.0357% ( 7) 00:14:08.955 5.096 - 5.120: 98.0670% ( 4) 00:14:08.955 5.120 - 5.144: 98.1374% ( 9) 00:14:08.955 5.144 - 5.167: 98.1844% ( 6) 00:14:08.955 5.167 - 5.191: 98.2313% ( 6) 00:14:08.955 5.191 - 5.215: 98.2548% ( 3) 00:14:08.955 5.215 - 5.239: 98.2705% ( 2) 00:14:08.955 5.239 - 5.262: 98.2939% ( 3) 
00:14:08.955 5.262 - 5.286: 98.3252% ( 4) 00:14:08.955 5.310 - 5.333: 98.3331% ( 1) 00:14:08.955 5.333 - 5.357: 98.3487% ( 2) 00:14:08.955 5.357 - 5.381: 98.3566% ( 1) 00:14:08.955 5.404 - 5.428: 98.3644% ( 1) 00:14:08.955 5.452 - 5.476: 98.3800% ( 2) 00:14:08.955 5.499 - 5.523: 98.3879% ( 1) 00:14:08.955 5.523 - 5.547: 98.4035% ( 2) 00:14:08.955 5.547 - 5.570: 98.4113% ( 1) 00:14:08.955 5.570 - 5.594: 98.4192% ( 1) 00:14:08.955 5.784 - 5.807: 98.4270% ( 1) 00:14:08.955 6.068 - 6.116: 98.4348% ( 1) 00:14:08.955 6.258 - 6.305: 98.4426% ( 1) 00:14:08.955 6.353 - 6.400: 98.4505% ( 1) 00:14:08.955 6.447 - 6.495: 98.4583% ( 1) 00:14:08.955 6.637 - 6.684: 98.4661% ( 1) 00:14:08.955 6.684 - 6.732: 98.4739% ( 1) 00:14:08.955 6.827 - 6.874: 98.4818% ( 1) 00:14:08.955 6.874 - 6.921: 98.4896% ( 1) 00:14:08.955 7.016 - 7.064: 98.4974% ( 1) 00:14:08.955 7.111 - 7.159: 98.5052% ( 1) 00:14:08.955 7.396 - 7.443: 98.5131% ( 1) 00:14:08.955 7.680 - 7.727: 98.5209% ( 1) 00:14:08.955 7.727 - 7.775: 98.5365% ( 2) 00:14:08.955 7.775 - 7.822: 98.5522% ( 2) 00:14:08.955 7.870 - 7.917: 98.5600% ( 1) 00:14:08.955 7.917 - 7.964: 98.5679% ( 1) 00:14:08.955 7.964 - 8.012: 98.5835% ( 2) 00:14:08.955 8.059 - 8.107: 98.5913% ( 1) 00:14:08.955 8.154 - 8.201: 98.6070% ( 2) 00:14:08.955 8.201 - 8.249: 98.6226% ( 2) 00:14:08.955 8.249 - 8.296: 98.6305% ( 1) 00:14:08.955 8.344 - 8.391: 98.6461% ( 2) 00:14:08.955 8.391 - 8.439: 98.6539% ( 1) 00:14:08.955 8.486 - 8.533: 98.6774% ( 3) 00:14:08.955 8.676 - 8.723: 98.6852% ( 1) 00:14:08.955 8.723 - 8.770: 98.7087% ( 3) 00:14:08.955 8.865 - 8.913: 98.7165% ( 1) 00:14:08.955 9.007 - 9.055: 98.7244% ( 1) 00:14:08.955 9.055 - 9.102: 98.7557% ( 4) 00:14:08.955 9.102 - 9.150: 98.7635% ( 1) 00:14:08.955 9.244 - 9.292: 98.7713% ( 1) 00:14:08.955 9.529 - 9.576: 98.7948% ( 3) 00:14:08.955 9.576 - 9.624: 98.8105% ( 2) 00:14:08.955 9.813 - 9.861: 98.8183% ( 1) 00:14:08.955 10.050 - 10.098: 98.8261% ( 1) 00:14:08.955 10.145 - 10.193: 98.8339% ( 1) 00:14:08.955 10.193 - 
10.240: 98.8418% ( 1) 00:14:08.955 10.430 - 10.477: 98.8496% ( 1) 00:14:08.955 10.951 - 10.999: 98.8574% ( 1) 00:14:08.955 10.999 - 11.046: 98.8652% ( 1) 00:14:08.955 11.520 - 11.567: 98.8731% ( 1) 00:14:08.955 11.757 - 11.804: 98.8887% ( 2) 00:14:08.955 12.800 - 12.895: 98.8965% ( 1) 00:14:08.955 13.179 - 13.274: 98.9044% ( 1) 00:14:08.955 13.369 - 13.464: 98.9122% ( 1) 00:14:08.955 13.559 - 13.653: 98.9278% ( 2) 00:14:08.955 13.748 - 13.843: 98.9357% ( 1) 00:14:08.955 14.412 - 14.507: 98.9435% ( 1) 00:14:08.955 14.601 - 14.696: 98.9513% ( 1) 00:14:08.955 15.360 - 15.455: 98.9591% ( 1) 00:14:08.955 15.455 - 15.550: 98.9670% ( 1) 00:14:08.955 17.067 - 17.161: 98.9748% ( 1) 00:14:08.955 17.161 - 17.256: 98.9826% ( 1) 00:14:08.955 17.256 - 17.351: 98.9905% ( 1) 00:14:08.955 17.351 - 17.446: 99.0218% ( 4) 00:14:08.955 17.446 - 17.541: 99.0452% ( 3) 00:14:08.955 17.541 - 17.636: 99.0765% ( 4) 00:14:08.955 17.636 - 17.730: 99.1391% ( 8) 00:14:08.955 17.730 - 17.825: 99.1704% ( 4) 00:14:08.955 17.825 - 17.920: 99.1939% ( 3) 00:14:08.955 17.920 - 18.015: 99.2565% ( 8) 00:14:08.955 18.015 - 18.110: 99.3035% ( 6) 00:14:08.955 18.110 - 18.204: 99.3974% ( 12) 00:14:08.955 18.204 - 18.299: 99.5070% ( 14) 00:14:08.955 18.299 - 18.394: 99.5461% ( 5) 00:14:08.955 18.394 - 18.489: 99.6009% ( 7) 00:14:08.955 18.489 - 18.584: 99.6400% ( 5) 00:14:08.955 18.584 - 18.679: 99.6713% ( 4) 00:14:08.955 18.679 - 18.773: 99.7026% ( 4) 00:14:08.955 18.773 - 18.868: 99.7339% ( 4) 00:14:08.955 18.868 - 18.963: 99.7496% ( 2) 00:14:08.955 18.963 - 19.058: 99.7809% ( 4) 00:14:08.955 19.058 - 19.153: 99.7965% ( 2) 00:14:08.955 19.247 - 19.342: 99.8122% ( 2) 00:14:08.955 19.342 - 19.437: 99.8357% ( 3) 00:14:08.955 19.437 - 19.532: 99.8435% ( 1) 00:14:08.955 19.532 - 19.627: 99.8513% ( 1) 00:14:08.955 20.859 - 20.954: 99.8591% ( 1) 00:14:08.955 21.144 - 21.239: 99.8670% ( 1) 00:14:08.955 22.661 - 22.756: 99.8748% ( 1) 00:14:08.955 23.230 - 23.324: 99.8826% ( 1) 00:14:08.955 23.988 - 24.083: 99.8904% 
( 1) 00:14:08.955 24.652 - 24.841: 99.8983% ( 1) 00:14:08.955 24.841 - 25.031: 99.9139% ( 2) 00:14:08.955 26.548 - 26.738: 99.9217% ( 1) 00:14:08.955 27.307 - 27.496: 99.9296% ( 1) 00:14:08.955 27.496 - 27.686: 99.9374% ( 1) 00:14:08.955 28.065 - 28.255: 99.9452% ( 1) 00:14:08.955 29.013 - 29.203: 99.9530% ( 1) 00:14:08.955 3980.705 - 4004.978: 100.0000% ( 6) 00:14:08.955 00:14:08.955 Complete histogram 00:14:08.955 ================== 00:14:08.955 Range in us Cumulative Count 00:14:08.955 2.062 - 2.074: 1.4243% ( 182) 00:14:08.955 2.074 - 2.086: 36.1089% ( 4432) 00:14:08.955 2.086 - 2.098: 53.5921% ( 2234) 00:14:08.955 2.098 - 2.110: 55.8538% ( 289) 00:14:08.955 2.110 - 2.121: 59.8529% ( 511) 00:14:08.955 2.121 - 2.133: 62.0207% ( 277) 00:14:08.955 2.133 - 2.145: 65.9728% ( 505) 00:14:08.955 2.145 - 2.157: 76.6552% ( 1365) 00:14:08.955 2.157 - 2.169: 79.8873% ( 413) 00:14:08.955 2.169 - 2.181: 80.7795% ( 114) 00:14:08.955 2.181 - 2.193: 82.2038% ( 182) 00:14:08.955 2.193 - 2.204: 82.8612% ( 84) 00:14:08.955 2.204 - 2.216: 84.1916% ( 170) 00:14:08.955 2.216 - 2.228: 88.3941% ( 537) 00:14:08.955 2.228 - 2.240: 90.0610% ( 213) 00:14:08.955 2.240 - 2.252: 92.2054% ( 274) 00:14:08.955 2.252 - 2.264: 93.2619% ( 135) 00:14:08.955 2.264 - 2.276: 93.6453% ( 49) 00:14:08.955 2.276 - 2.287: 93.8958% ( 32) 00:14:08.955 2.287 - 2.299: 94.2401% ( 44) 00:14:08.955 2.299 - 2.311: 94.4984% ( 33) 00:14:08.955 2.311 - 2.323: 95.1792% ( 87) 00:14:08.955 2.323 - 2.335: 95.4688% ( 37) 00:14:08.955 2.335 - 2.347: 95.5705% ( 13) 00:14:08.955 2.347 - 2.359: 95.6488% ( 10) 00:14:08.955 2.359 - 2.370: 95.7270% ( 10) 00:14:08.955 2.370 - 2.382: 95.7975% ( 9) 00:14:08.955 2.382 - 2.394: 95.8835% ( 11) 00:14:08.955 2.394 - 2.406: 96.1262% ( 31) 00:14:08.955 2.406 - 2.418: 96.2983% ( 22) 00:14:08.955 2.418 - 2.430: 96.3922% ( 12) 00:14:08.955 2.430 - 2.441: 96.5488% ( 20) 00:14:08.955 2.441 - 2.453: 96.6740% ( 16) 00:14:08.955 2.453 - 2.465: 96.9009% ( 29) 00:14:08.955 2.465 - 2.477: 97.1044% ( 
26) 00:14:08.955 2.477 - 2.489: 97.3079% ( 26) 00:14:08.955 2.489 - 2.501: 97.4644% ( 20) 00:14:08.955 2.501 - 2.513: 97.7148% ( 32) 00:14:08.955 2.513 - 2.524: 97.8870% ( 22) 00:14:08.955 2.524 - 2.536: 98.0122% ( 16) 00:14:08.955 2.536 - 2.548: 98.0826% ( 9) 00:14:08.955 2.548 - 2.560: 98.1844% ( 13) 00:14:08.955 2.560 - 2.572: 98.2705% ( 11) 00:14:08.955 2.572 - 2.584: 98.3409% ( 9) 00:14:08.955 2.584 - 2.596: 98.3800% ( 5) 00:14:08.955 2.596 - 2.607: 98.4035% ( 3) 00:14:08.955 2.607 - 2.619: 98.4270% ( 3) 00:14:08.955 2.619 - 2.631: 98.4426% ( 2) 00:14:08.955 2.631 - 2.643: 98.4661% ( 3) 00:14:08.955 2.667 - 2.679: 98.4974% ( 4) 00:14:08.955 2.750 - 2.761: 98.5052% ( 1) 00:14:08.955 2.773 - 2.785: 98.5131% ( 1) 00:14:08.955 2.785 - 2.797: 98.5209% ( 1) 00:14:08.955 2.797 - 2.809: 98.5287% ( 1) 00:14:08.955 3.556 - 3.579: 98.5365% ( 1) 00:14:08.955 3.603 - 3.627: 98.5522% ( 2) 00:14:08.955 3.627 - 3.650: 98.5600% ( 1) 00:14:08.955 3.721 - 3.745: 98.5679% ( 1) 00:14:08.955 3.769 - 3.793: 98.5757% ( 1) 00:14:08.955 3.793 - 3.816: 98.5913% ( 2) 00:14:08.955 3.816 - 3.840: 98.5992% ( 1) 00:14:08.955 3.840 - 3.864: 98.6070% ( 1) 00:14:08.955 3.887 - 3.911: 98.6148% ( 1) 00:14:08.955 3.982 - 4.006: 98.6226% ( 1) 00:14:08.955 4.030 - 4.053: 98.6305% ( 1) 00:14:08.955 4.101 - 4.124: 98.6383% ( 1) 00:14:08.955 4.124 - 4.148: 98.6461% ( 1) 00:14:08.955 4.172 - 4.196: 98.6539% ( 1) 00:14:08.955 4.196 - 4.219: 98.6618% ( 1) 00:14:08.955 4.219 - 4.243: 98.6696% ( 1) 00:14:08.955 4.243 - 4.267: 98.6774% ( 1) 00:14:08.955 4.409 - 4.433: 98.6852% ( 1) 00:14:08.955 4.433 - 4.456: 98.6931% ( 1) 00:14:08.955 5.665 - 5.689: 98.7009% ( 1) 00:14:08.955 5.689 - 5.713: 98.7087% ( 1) 00:14:08.955 6.210 - 6.258: 98.7165% ( 1) 00:14:08.955 6.353 - 6.400: 98.7244% ( 1) 00:14:08.955 6.447 - 6.495: 98.7322% ( 1) 00:14:08.955 6.732 - 6.779: 98.7400% ( 1) 00:14:08.955 6.827 - 6.874: 98.7478% ( 1) 00:14:08.955 6.874 - 6.921: 98.7557% ( 1) 00:14:08.955 6.921 - 6.969: 98.7635% ( 1) 00:14:08.956 
[2024-10-17 16:42:22.555807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.956 7.111 - 7.159: 98.7713% ( 1) 00:14:08.956 7.206 - 7.253: 98.7792% ( 1) 00:14:08.956 7.253 - 7.301: 98.7870% ( 1) 00:14:08.956 7.301 - 7.348: 98.8026% ( 2) 00:14:08.956 7.633 - 7.680: 98.8105% ( 1) 00:14:08.956 7.917 - 7.964: 98.8183% ( 1) 00:14:08.956 8.439 - 8.486: 98.8261% ( 1) 00:14:08.956 10.145 - 10.193: 98.8339% ( 1) 00:14:08.956 12.610 - 12.705: 98.8418% ( 1) 00:14:08.956 15.360 - 15.455: 98.8496% ( 1) 00:14:08.956 15.455 - 15.550: 98.8652% ( 2) 00:14:08.956 15.644 - 15.739: 98.8731% ( 1) 00:14:08.956 15.834 - 15.929: 98.8887% ( 2) 00:14:08.956 15.929 - 16.024: 98.9044% ( 2) 00:14:08.956 16.024 - 16.119: 98.9278% ( 3) 00:14:08.956 16.119 - 16.213: 98.9357% ( 1) 00:14:08.956 16.213 - 16.308: 98.9826% ( 6) 00:14:08.956 16.308 - 16.403: 99.0139% ( 4) 00:14:08.956 16.403 - 16.498: 99.0218% ( 1) 00:14:08.956 16.498 - 16.593: 99.0452% ( 3) 00:14:08.956 16.593 - 16.687: 99.0765% ( 4) 00:14:08.956 16.687 - 16.782: 99.1000% ( 3) 00:14:08.956 16.782 - 16.877: 99.1157% ( 2) 00:14:08.956 16.877 - 16.972: 99.1391% ( 3) 00:14:08.956 16.972 - 17.067: 99.1783% ( 5) 00:14:08.956 17.067 - 17.161: 99.2018% ( 3) 00:14:08.956 17.161 - 17.256: 99.2174% ( 2) 00:14:08.956 17.256 - 17.351: 99.2409% ( 3) 00:14:08.956 17.351 - 17.446: 99.2487% ( 1) 00:14:08.956 17.636 - 17.730: 99.2565% ( 1) 00:14:08.956 17.730 - 17.825: 99.2722% ( 2) 00:14:08.956 18.110 - 18.204: 99.2800% ( 1) 00:14:08.956 18.204 - 18.299: 99.2878% ( 1) 00:14:08.956 18.394 - 18.489: 99.3035% ( 2) 00:14:08.956 18.489 - 18.584: 99.3113% ( 1) 00:14:08.956 21.333 - 21.428: 99.3191% ( 1) 00:14:08.956 3980.705 - 4004.978: 99.8044% ( 62) 00:14:08.956 4004.978 - 4029.250: 99.9765% ( 22) 00:14:08.956 7961.410 - 8009.956: 100.0000% ( 3) 00:14:08.956 00:14:08.956 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:08.956 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:08.956 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:08.956 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:08.956 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:09.523 [ 00:14:09.523 { 00:14:09.523 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:09.523 "subtype": "Discovery", 00:14:09.523 "listen_addresses": [], 00:14:09.523 "allow_any_host": true, 00:14:09.523 "hosts": [] 00:14:09.523 }, 00:14:09.523 { 00:14:09.523 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:09.523 "subtype": "NVMe", 00:14:09.523 "listen_addresses": [ 00:14:09.523 { 00:14:09.523 "trtype": "VFIOUSER", 00:14:09.523 "adrfam": "IPv4", 00:14:09.523 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:09.523 "trsvcid": "0" 00:14:09.524 } 00:14:09.524 ], 00:14:09.524 "allow_any_host": true, 00:14:09.524 "hosts": [], 00:14:09.524 "serial_number": "SPDK1", 00:14:09.524 "model_number": "SPDK bdev Controller", 00:14:09.524 "max_namespaces": 32, 00:14:09.524 "min_cntlid": 1, 00:14:09.524 "max_cntlid": 65519, 00:14:09.524 "namespaces": [ 00:14:09.524 { 00:14:09.524 "nsid": 1, 00:14:09.524 "bdev_name": "Malloc1", 00:14:09.524 "name": "Malloc1", 00:14:09.524 "nguid": "C6FF40DF0C1C4C058FDF3D6FE0C70C14", 00:14:09.524 "uuid": "c6ff40df-0c1c-4c05-8fdf-3d6fe0c70c14" 00:14:09.524 }, 00:14:09.524 { 00:14:09.524 "nsid": 2, 00:14:09.524 "bdev_name": "Malloc3", 00:14:09.524 "name": "Malloc3", 00:14:09.524 "nguid": "9BB745019BB24725BE0063F08CD9CC37", 00:14:09.524 "uuid": "9bb74501-9bb2-4725-be00-63f08cd9cc37" 
00:14:09.524 } 00:14:09.524 ] 00:14:09.524 }, 00:14:09.524 { 00:14:09.524 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:09.524 "subtype": "NVMe", 00:14:09.524 "listen_addresses": [ 00:14:09.524 { 00:14:09.524 "trtype": "VFIOUSER", 00:14:09.524 "adrfam": "IPv4", 00:14:09.524 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:09.524 "trsvcid": "0" 00:14:09.524 } 00:14:09.524 ], 00:14:09.524 "allow_any_host": true, 00:14:09.524 "hosts": [], 00:14:09.524 "serial_number": "SPDK2", 00:14:09.524 "model_number": "SPDK bdev Controller", 00:14:09.524 "max_namespaces": 32, 00:14:09.524 "min_cntlid": 1, 00:14:09.524 "max_cntlid": 65519, 00:14:09.524 "namespaces": [ 00:14:09.524 { 00:14:09.524 "nsid": 1, 00:14:09.524 "bdev_name": "Malloc2", 00:14:09.524 "name": "Malloc2", 00:14:09.524 "nguid": "92A061773EE742B18510F1A51F2FB0F4", 00:14:09.524 "uuid": "92a06177-3ee7-42b1-8510-f1a51f2fb0f4" 00:14:09.524 } 00:14:09.524 ] 00:14:09.524 } 00:14:09.524 ] 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2332562 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:09.524 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:09.524 [2024-10-17 16:42:23.072950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.782 Malloc4 00:14:09.782 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:10.040 [2024-10-17 16:42:23.498386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.040 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.040 Asynchronous Event Request test 00:14:10.040 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.040 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.040 Registering asynchronous event callbacks... 00:14:10.040 Starting namespace attribute notice tests for all controllers... 00:14:10.040 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:10.040 aer_cb - Changed Namespace 00:14:10.040 Cleaning up... 
00:14:10.299 [ 00:14:10.299 { 00:14:10.299 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:10.299 "subtype": "Discovery", 00:14:10.299 "listen_addresses": [], 00:14:10.299 "allow_any_host": true, 00:14:10.299 "hosts": [] 00:14:10.299 }, 00:14:10.299 { 00:14:10.299 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:10.299 "subtype": "NVMe", 00:14:10.299 "listen_addresses": [ 00:14:10.299 { 00:14:10.299 "trtype": "VFIOUSER", 00:14:10.299 "adrfam": "IPv4", 00:14:10.299 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:10.299 "trsvcid": "0" 00:14:10.299 } 00:14:10.299 ], 00:14:10.299 "allow_any_host": true, 00:14:10.299 "hosts": [], 00:14:10.299 "serial_number": "SPDK1", 00:14:10.299 "model_number": "SPDK bdev Controller", 00:14:10.299 "max_namespaces": 32, 00:14:10.299 "min_cntlid": 1, 00:14:10.299 "max_cntlid": 65519, 00:14:10.299 "namespaces": [ 00:14:10.299 { 00:14:10.299 "nsid": 1, 00:14:10.299 "bdev_name": "Malloc1", 00:14:10.299 "name": "Malloc1", 00:14:10.299 "nguid": "C6FF40DF0C1C4C058FDF3D6FE0C70C14", 00:14:10.299 "uuid": "c6ff40df-0c1c-4c05-8fdf-3d6fe0c70c14" 00:14:10.299 }, 00:14:10.299 { 00:14:10.299 "nsid": 2, 00:14:10.299 "bdev_name": "Malloc3", 00:14:10.299 "name": "Malloc3", 00:14:10.299 "nguid": "9BB745019BB24725BE0063F08CD9CC37", 00:14:10.299 "uuid": "9bb74501-9bb2-4725-be00-63f08cd9cc37" 00:14:10.299 } 00:14:10.299 ] 00:14:10.299 }, 00:14:10.299 { 00:14:10.299 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:10.299 "subtype": "NVMe", 00:14:10.299 "listen_addresses": [ 00:14:10.299 { 00:14:10.299 "trtype": "VFIOUSER", 00:14:10.299 "adrfam": "IPv4", 00:14:10.299 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:10.299 "trsvcid": "0" 00:14:10.299 } 00:14:10.299 ], 00:14:10.299 "allow_any_host": true, 00:14:10.299 "hosts": [], 00:14:10.299 "serial_number": "SPDK2", 00:14:10.299 "model_number": "SPDK bdev Controller", 00:14:10.299 "max_namespaces": 32, 00:14:10.299 "min_cntlid": 1, 00:14:10.299 "max_cntlid": 65519, 00:14:10.299 "namespaces": [ 
00:14:10.299 { 00:14:10.299 "nsid": 1, 00:14:10.299 "bdev_name": "Malloc2", 00:14:10.299 "name": "Malloc2", 00:14:10.299 "nguid": "92A061773EE742B18510F1A51F2FB0F4", 00:14:10.299 "uuid": "92a06177-3ee7-42b1-8510-f1a51f2fb0f4" 00:14:10.299 }, 00:14:10.299 { 00:14:10.299 "nsid": 2, 00:14:10.299 "bdev_name": "Malloc4", 00:14:10.299 "name": "Malloc4", 00:14:10.299 "nguid": "0D3FC4D3A7E54CC7AD7C6E17CDECE5DB", 00:14:10.299 "uuid": "0d3fc4d3-a7e5-4cc7-ad7c-6e17cdece5db" 00:14:10.299 } 00:14:10.299 ] 00:14:10.299 } 00:14:10.299 ] 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2332562 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2326336 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2326336 ']' 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2326336 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2326336 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2326336' 00:14:10.299 killing process with pid 2326336 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 2326336 00:14:10.299 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2326336 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2332710 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2332710' 00:14:10.558 Process pid: 2332710 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2332710 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2332710 ']' 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.558 
16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.558 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:10.558 [2024-10-17 16:42:24.224472] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:10.558 [2024-10-17 16:42:24.225536] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:14:10.558 [2024-10-17 16:42:24.225608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.817 [2024-10-17 16:42:24.288492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.817 [2024-10-17 16:42:24.349409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.817 [2024-10-17 16:42:24.349472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.817 [2024-10-17 16:42:24.349488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.817 [2024-10-17 16:42:24.349501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.817 [2024-10-17 16:42:24.349513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:10.817 [2024-10-17 16:42:24.351133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.817 [2024-10-17 16:42:24.351199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.817 [2024-10-17 16:42:24.351297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.817 [2024-10-17 16:42:24.351299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.817 [2024-10-17 16:42:24.443334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:10.817 [2024-10-17 16:42:24.443580] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:10.817 [2024-10-17 16:42:24.443919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:10.817 [2024-10-17 16:42:24.444518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:10.817 [2024-10-17 16:42:24.444774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:14:10.817 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.817 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:10.817 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:12.197 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:12.197 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:12.197 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:12.197 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.197 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:12.197 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:12.457 Malloc1 00:14:12.457 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:12.718 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:12.977 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:13.543 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.543 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:13.543 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:13.800 Malloc2 00:14:13.800 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:14.058 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:14.316 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:14.574 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2332710 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2332710 ']' 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2332710 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.575 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2332710 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2332710' 00:14:14.575 killing process with pid 2332710 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2332710 00:14:14.575 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2332710 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:14.834 00:14:14.834 real 0m53.925s 00:14:14.834 user 3m28.690s 00:14:14.834 sys 0m3.980s 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:14.834 ************************************ 00:14:14.834 END TEST nvmf_vfio_user 00:14:14.834 ************************************ 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.834 ************************************ 00:14:14.834 START TEST nvmf_vfio_user_nvme_compliance 00:14:14.834 ************************************ 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:14.834 * Looking for test storage... 00:14:14.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:14.834 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.093 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.093 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.093 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:15.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.093 --rc genhtml_branch_coverage=1 00:14:15.094 --rc genhtml_function_coverage=1 00:14:15.094 --rc genhtml_legend=1 00:14:15.094 --rc geninfo_all_blocks=1 00:14:15.094 --rc geninfo_unexecuted_blocks=1 00:14:15.094 00:14:15.094 ' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:15.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.094 --rc genhtml_branch_coverage=1 00:14:15.094 --rc genhtml_function_coverage=1 00:14:15.094 --rc genhtml_legend=1 00:14:15.094 --rc geninfo_all_blocks=1 00:14:15.094 --rc geninfo_unexecuted_blocks=1 00:14:15.094 00:14:15.094 ' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:15.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.094 --rc genhtml_branch_coverage=1 00:14:15.094 --rc genhtml_function_coverage=1 00:14:15.094 --rc 
genhtml_legend=1 00:14:15.094 --rc geninfo_all_blocks=1 00:14:15.094 --rc geninfo_unexecuted_blocks=1 00:14:15.094 00:14:15.094 ' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:15.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.094 --rc genhtml_branch_coverage=1 00:14:15.094 --rc genhtml_function_coverage=1 00:14:15.094 --rc genhtml_legend=1 00:14:15.094 --rc geninfo_all_blocks=1 00:14:15.094 --rc geninfo_unexecuted_blocks=1 00:14:15.094 00:14:15.094 ' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.094 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.094 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2333316 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2333316' 00:14:15.094 Process pid: 2333316 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2333316 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2333316 ']' 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.094 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.094 [2024-10-17 16:42:28.652935] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:14:15.094 [2024-10-17 16:42:28.653038] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.094 [2024-10-17 16:42:28.710672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.094 [2024-10-17 16:42:28.771675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.094 [2024-10-17 16:42:28.771740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.094 [2024-10-17 16:42:28.771756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.094 [2024-10-17 16:42:28.771771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.094 [2024-10-17 16:42:28.771784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:15.094 [2024-10-17 16:42:28.773263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.094 [2024-10-17 16:42:28.773340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.094 [2024-10-17 16:42:28.773336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.354 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.354 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:14:15.354 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.292 16:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.292 malloc0 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:16.292 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:16.552 00:14:16.552 00:14:16.552 CUnit - A unit testing framework for C - Version 2.1-3 00:14:16.552 http://cunit.sourceforge.net/ 00:14:16.552 00:14:16.552 00:14:16.552 Suite: nvme_compliance 00:14:16.552 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-17 16:42:30.125573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.552 [2024-10-17 16:42:30.127140] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:16.552 [2024-10-17 16:42:30.127167] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:16.552 [2024-10-17 16:42:30.127180] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:16.552 [2024-10-17 16:42:30.131616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.552 passed 00:14:16.552 Test: admin_identify_ctrlr_verify_fused ...[2024-10-17 16:42:30.221265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.552 [2024-10-17 16:42:30.224291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.811 passed 00:14:16.811 Test: admin_identify_ns ...[2024-10-17 16:42:30.313306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.811 [2024-10-17 16:42:30.377018] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:16.811 [2024-10-17 16:42:30.385016] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:16.811 [2024-10-17 16:42:30.405248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:16.811 passed 00:14:16.811 Test: admin_get_features_mandatory_features ...[2024-10-17 16:42:30.488281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.811 [2024-10-17 16:42:30.491322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.070 passed 00:14:17.070 Test: admin_get_features_optional_features ...[2024-10-17 16:42:30.577853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.070 [2024-10-17 16:42:30.580898] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.070 passed 00:14:17.070 Test: admin_set_features_number_of_queues ...[2024-10-17 16:42:30.668483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.339 [2024-10-17 16:42:30.773256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.339 passed 00:14:17.339 Test: admin_get_log_page_mandatory_logs ...[2024-10-17 16:42:30.858406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.339 [2024-10-17 16:42:30.861431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.339 passed 00:14:17.339 Test: admin_get_log_page_with_lpo ...[2024-10-17 16:42:30.943481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.339 [2024-10-17 16:42:31.011032] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:17.339 [2024-10-17 16:42:31.024107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.630 passed 00:14:17.630 Test: fabric_property_get ...[2024-10-17 16:42:31.108500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.630 [2024-10-17 16:42:31.109785] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:17.630 [2024-10-17 16:42:31.111529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.630 passed 00:14:17.630 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-17 16:42:31.196086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.630 [2024-10-17 16:42:31.197396] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:17.630 [2024-10-17 16:42:31.199102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.630 passed 00:14:17.630 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-17 16:42:31.283663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.891 [2024-10-17 16:42:31.375021] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:17.891 [2024-10-17 16:42:31.391013] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:17.891 [2024-10-17 16:42:31.400122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.891 passed 00:14:17.891 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-17 16:42:31.479589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.891 [2024-10-17 16:42:31.480906] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:17.891 [2024-10-17 16:42:31.482609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.891 passed 00:14:17.891 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-17 16:42:31.571142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.149 [2024-10-17 16:42:31.647024] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:18.149 [2024-10-17 
16:42:31.671009] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.149 [2024-10-17 16:42:31.676124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.149 passed 00:14:18.149 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-17 16:42:31.763199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.149 [2024-10-17 16:42:31.764534] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:18.149 [2024-10-17 16:42:31.764586] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:18.149 [2024-10-17 16:42:31.766223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.149 passed 00:14:18.409 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-17 16:42:31.851815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.409 [2024-10-17 16:42:31.943008] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:18.409 [2024-10-17 16:42:31.951013] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:18.409 [2024-10-17 16:42:31.959011] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:18.409 [2024-10-17 16:42:31.967013] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:18.409 [2024-10-17 16:42:31.996104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.409 passed 00:14:18.409 Test: admin_create_io_sq_verify_pc ...[2024-10-17 16:42:32.081649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.409 [2024-10-17 16:42:32.097034] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:18.668 [2024-10-17 16:42:32.115107] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.668 passed 00:14:18.668 Test: admin_create_io_qp_max_qps ...[2024-10-17 16:42:32.201685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.046 [2024-10-17 16:42:33.312020] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:20.046 [2024-10-17 16:42:33.700463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.046 passed 00:14:20.305 Test: admin_create_io_sq_shared_cq ...[2024-10-17 16:42:33.785505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.305 [2024-10-17 16:42:33.917009] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:20.305 [2024-10-17 16:42:33.954092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.305 passed 00:14:20.305 00:14:20.305 Run Summary: Type Total Ran Passed Failed Inactive 00:14:20.305 suites 1 1 n/a 0 0 00:14:20.305 tests 18 18 18 0 0 00:14:20.305 asserts 360 360 360 0 n/a 00:14:20.305 00:14:20.305 Elapsed time = 1.587 seconds 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2333316 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2333316 ']' 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2333316 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2333316 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2333316' 00:14:20.566 killing process with pid 2333316 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2333316 00:14:20.566 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2333316 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:20.826 00:14:20.826 real 0m5.869s 00:14:20.826 user 0m16.420s 00:14:20.826 sys 0m0.546s 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.826 ************************************ 00:14:20.826 END TEST nvmf_vfio_user_nvme_compliance 00:14:20.826 ************************************ 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.826 ************************************ 00:14:20.826 START TEST nvmf_vfio_user_fuzz 00:14:20.826 ************************************ 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:20.826 * Looking for test storage... 00:14:20.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.826 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.827 16:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.827 --rc genhtml_branch_coverage=1 00:14:20.827 --rc genhtml_function_coverage=1 00:14:20.827 --rc genhtml_legend=1 00:14:20.827 --rc geninfo_all_blocks=1 00:14:20.827 --rc geninfo_unexecuted_blocks=1 00:14:20.827 00:14:20.827 ' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.827 --rc genhtml_branch_coverage=1 00:14:20.827 --rc genhtml_function_coverage=1 00:14:20.827 --rc genhtml_legend=1 00:14:20.827 --rc geninfo_all_blocks=1 00:14:20.827 --rc geninfo_unexecuted_blocks=1 00:14:20.827 00:14:20.827 ' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.827 --rc genhtml_branch_coverage=1 00:14:20.827 --rc genhtml_function_coverage=1 00:14:20.827 --rc genhtml_legend=1 00:14:20.827 --rc geninfo_all_blocks=1 00:14:20.827 --rc geninfo_unexecuted_blocks=1 00:14:20.827 00:14:20.827 ' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:20.827 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:20.827 --rc genhtml_branch_coverage=1 00:14:20.827 --rc genhtml_function_coverage=1 00:14:20.827 --rc genhtml_legend=1 00:14:20.827 --rc geninfo_all_blocks=1 00:14:20.827 --rc geninfo_unexecuted_blocks=1 00:14:20.827 00:14:20.827 ' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 16:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2334055 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2334055' 00:14:20.827 Process pid: 2334055 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:20.827 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2334055 00:14:20.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2334055 ']' 00:14:20.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.828 16:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.395 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.395 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:14:21.395 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.336 malloc0 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:22.336 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:54.503 Fuzzing completed. Shutting down the fuzz application 00:14:54.503 00:14:54.503 Dumping successful admin opcodes: 00:14:54.503 8, 9, 10, 24, 00:14:54.503 Dumping successful io opcodes: 00:14:54.503 0, 00:14:54.503 NS: 0x20000081ef00 I/O qp, Total commands completed: 604443, total successful commands: 2336, random_seed: 1823476480 00:14:54.503 NS: 0x20000081ef00 admin qp, Total commands completed: 131560, total successful commands: 1069, random_seed: 1940554880 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2334055 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2334055 ']' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2334055 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2334055 00:14:54.503 16:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2334055' 00:14:54.503 killing process with pid 2334055 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2334055 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2334055 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:54.503 00:14:54.503 real 0m32.255s 00:14:54.503 user 0m32.255s 00:14:54.503 sys 0m28.945s 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:54.503 ************************************ 00:14:54.503 END TEST nvmf_vfio_user_fuzz 00:14:54.503 ************************************ 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.503 ************************************ 00:14:54.503 START TEST nvmf_auth_target 00:14:54.503 ************************************ 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:54.503 * Looking for test storage... 00:14:54.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.503 16:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.503 16:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.503 --rc genhtml_branch_coverage=1 00:14:54.503 --rc genhtml_function_coverage=1 00:14:54.503 --rc genhtml_legend=1 00:14:54.503 --rc geninfo_all_blocks=1 00:14:54.503 --rc geninfo_unexecuted_blocks=1 00:14:54.503 00:14:54.503 ' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.503 --rc genhtml_branch_coverage=1 00:14:54.503 --rc genhtml_function_coverage=1 00:14:54.503 --rc genhtml_legend=1 00:14:54.503 --rc geninfo_all_blocks=1 00:14:54.503 --rc geninfo_unexecuted_blocks=1 00:14:54.503 00:14:54.503 ' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.503 --rc genhtml_branch_coverage=1 00:14:54.503 --rc genhtml_function_coverage=1 00:14:54.503 --rc genhtml_legend=1 00:14:54.503 --rc geninfo_all_blocks=1 00:14:54.503 --rc geninfo_unexecuted_blocks=1 00:14:54.503 00:14:54.503 ' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.503 --rc genhtml_branch_coverage=1 00:14:54.503 --rc genhtml_function_coverage=1 00:14:54.503 --rc genhtml_legend=1 00:14:54.503 
--rc geninfo_all_blocks=1 00:14:54.503 --rc geninfo_unexecuted_blocks=1 00:14:54.503 00:14:54.503 ' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.503 
16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.503 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:54.504 16:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:54.504 16:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.504 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:55.440 16:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.440 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:55.441 16:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:55.441 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:55.441 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.441 
16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:55.441 Found net devices under 0000:09:00.0: cvl_0_0 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:55.441 
16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:55.441 Found net devices under 0000:09:00.1: cvl_0_1 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:55.441 16:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:55.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:14:55.441 00:14:55.441 --- 10.0.0.2 ping statistics --- 00:14:55.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.441 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:14:55.441 00:14:55.441 --- 10.0.0.1 ping statistics --- 00:14:55.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.441 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2339503 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2339503 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2339503 ']' 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.441 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2339524 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=null 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=84efb8c8e0129861790710567a3c4dae66abf318494032b2 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.uI7 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 84efb8c8e0129861790710567a3c4dae66abf318494032b2 0 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 84efb8c8e0129861790710567a3c4dae66abf318494032b2 0 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=84efb8c8e0129861790710567a3c4dae66abf318494032b2 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.uI7 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.uI7 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.uI7 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5b90e0bed57d3307674cfab75897efb9869094a75eebe79f7e33dd2580d8a9d0 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.kSW 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5b90e0bed57d3307674cfab75897efb9869094a75eebe79f7e33dd2580d8a9d0 3 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5b90e0bed57d3307674cfab75897efb9869094a75eebe79f7e33dd2580d8a9d0 3 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5b90e0bed57d3307674cfab75897efb9869094a75eebe79f7e33dd2580d8a9d0 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.kSW 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.kSW 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.kSW 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=01b84bd8cd950d897fa411335c958f61 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.wgd 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 01b84bd8cd950d897fa411335c958f61 1 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
01b84bd8cd950d897fa411335c958f61 1 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=01b84bd8cd950d897fa411335c958f61 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:55.701 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.wgd 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.wgd 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.wgd 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9898ac124d6b5de88bab9faec97ec728f77cf7ac4824fd68 00:14:55.960 16:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.P2Z 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9898ac124d6b5de88bab9faec97ec728f77cf7ac4824fd68 2 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9898ac124d6b5de88bab9faec97ec728f77cf7ac4824fd68 2 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9898ac124d6b5de88bab9faec97ec728f77cf7ac4824fd68 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.P2Z 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.P2Z 00:14:55.960 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.P2Z 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5cc7e51ee4ff1a99cc3722b99e61543196c75cbf98484e80 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.LOE 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5cc7e51ee4ff1a99cc3722b99e61543196c75cbf98484e80 2 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5cc7e51ee4ff1a99cc3722b99e61543196c75cbf98484e80 2 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5cc7e51ee4ff1a99cc3722b99e61543196c75cbf98484e80 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.LOE 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.LOE 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.LOE 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7729e2505af438fbf62ee918aaa836ca 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Ehf 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 7729e2505af438fbf62ee918aaa836ca 1 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7729e2505af438fbf62ee918aaa836ca 1 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7729e2505af438fbf62ee918aaa836ca 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Ehf 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Ehf 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Ehf 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d9c0753e54da6746b0d7f8bf3ae60c8803fc4b471ea38d226407c8b093dbb3b7 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.MT3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d9c0753e54da6746b0d7f8bf3ae60c8803fc4b471ea38d226407c8b093dbb3b7 3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 d9c0753e54da6746b0d7f8bf3ae60c8803fc4b471ea38d226407c8b093dbb3b7 3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d9c0753e54da6746b0d7f8bf3ae60c8803fc4b471ea38d226407c8b093dbb3b7 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.MT3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.MT3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.MT3 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2339503 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2339503 ']' 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
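The trace above runs `gen_dhchap_key` four times: each pass reads random bytes with `xxd -p -c0 -l N /dev/urandom`, then pipes them through an inline `python -` step (`format_key DHHC-1 <hex> <digest>`) to write a `/tmp/spdk.key-*` secret file. The formatting step itself is not shown in the log; the sketch below is a hypothetical reconstruction, assuming the DH-HMAC-CHAP secret representation from the NVMe specification (base64 of the key bytes followed by a 4-byte little-endian CRC32, with a two-digit digest indicator: 00=null, 01=sha256, 02=sha384, 03=sha512). It is not SPDK's actual `format_key` implementation.

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Format a raw hex key as a DHHC-1 secret string.

    Assumed layout (per the NVMe DH-HMAC-CHAP secret representation):
      DHHC-1:<dd>:<base64(key || crc32(key) little-endian)>:
    where <dd> is the two-digit digest indicator.
    """
    key = bytes.fromhex(hex_key)
    # Append the CRC32 of the key, little-endian, before base64-encoding.
    crc = zlib.crc32(key).to_bytes(4, "little")
    encoded = base64.b64encode(key + crc).decode("ascii")
    return f"DHHC-1:{digest:02d}:{encoded}:"


if __name__ == "__main__":
    # Hex string and digest index mirror the log's key[0] invocation
    # (48 hex chars = 24 key bytes, digest 0 = null).
    print(format_dhchap_key(
        "84efb8c8e0129861790710567a3c4dae66abf318494032b2", 0))
```

The embedded CRC lets a consumer verify the secret was not truncated or corrupted when it is later loaded via `keyring_file_add_key`, as the trace does further down.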
00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.961 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2339524 /var/tmp/host.sock 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2339524 ']' 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:56.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.220 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uI7 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uI7 00:14:56.789 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uI7 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.kSW ]] 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kSW 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kSW 00:14:57.049 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kSW 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wgd 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.wgd 00:14:57.308 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.wgd 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.P2Z ]] 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P2Z 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P2Z 00:14:57.566 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P2Z 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LOE 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LOE 00:14:57.825 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LOE 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Ehf ]] 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ehf 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ehf 00:14:58.084 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ehf 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MT3 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MT3 00:14:58.342 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MT3 00:14:58.601 16:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:58.601 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:58.601 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.601 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.601 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.601 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.859 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.860 16:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.860 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.119 00:14:59.119 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.119 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.119 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.378 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.378 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.378 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.378 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.636 { 00:14:59.636 "cntlid": 1, 00:14:59.636 "qid": 0, 00:14:59.636 "state": "enabled", 00:14:59.636 "thread": "nvmf_tgt_poll_group_000", 00:14:59.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:59.636 "listen_address": { 00:14:59.636 "trtype": "TCP", 00:14:59.636 "adrfam": "IPv4", 00:14:59.636 "traddr": "10.0.0.2", 00:14:59.636 "trsvcid": "4420" 00:14:59.636 }, 00:14:59.636 "peer_address": { 00:14:59.636 "trtype": "TCP", 00:14:59.636 "adrfam": "IPv4", 00:14:59.636 "traddr": "10.0.0.1", 00:14:59.636 "trsvcid": "55934" 00:14:59.636 }, 00:14:59.636 "auth": { 00:14:59.636 "state": "completed", 00:14:59.636 "digest": "sha256", 00:14:59.636 "dhgroup": "null" 00:14:59.636 } 00:14:59.636 } 00:14:59.636 ]' 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.636 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.894 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:14:59.894 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:00.830 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.088 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.346 00:15:01.346 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.346 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.346 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.914 { 00:15:01.914 "cntlid": 3, 00:15:01.914 "qid": 0, 00:15:01.914 "state": "enabled", 00:15:01.914 "thread": "nvmf_tgt_poll_group_000", 00:15:01.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:01.914 "listen_address": { 00:15:01.914 "trtype": "TCP", 00:15:01.914 "adrfam": "IPv4", 00:15:01.914 
"traddr": "10.0.0.2", 00:15:01.914 "trsvcid": "4420" 00:15:01.914 }, 00:15:01.914 "peer_address": { 00:15:01.914 "trtype": "TCP", 00:15:01.914 "adrfam": "IPv4", 00:15:01.914 "traddr": "10.0.0.1", 00:15:01.914 "trsvcid": "50584" 00:15:01.914 }, 00:15:01.914 "auth": { 00:15:01.914 "state": "completed", 00:15:01.914 "digest": "sha256", 00:15:01.914 "dhgroup": "null" 00:15:01.914 } 00:15:01.914 } 00:15:01.914 ]' 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.914 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.172 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:02.172 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.109 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.367 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.625 00:15:03.625 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.625 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.625 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.882 { 00:15:03.882 "cntlid": 5, 00:15:03.882 "qid": 0, 00:15:03.882 "state": "enabled", 00:15:03.882 "thread": "nvmf_tgt_poll_group_000", 00:15:03.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:03.882 "listen_address": { 00:15:03.882 "trtype": "TCP", 00:15:03.882 "adrfam": "IPv4", 00:15:03.882 "traddr": "10.0.0.2", 00:15:03.882 "trsvcid": "4420" 00:15:03.882 }, 00:15:03.882 "peer_address": { 00:15:03.882 "trtype": "TCP", 00:15:03.882 "adrfam": "IPv4", 00:15:03.882 "traddr": "10.0.0.1", 00:15:03.882 "trsvcid": "50612" 00:15:03.882 }, 00:15:03.882 "auth": { 00:15:03.882 "state": "completed", 00:15:03.882 "digest": "sha256", 00:15:03.882 "dhgroup": "null" 00:15:03.882 } 00:15:03.882 } 00:15:03.882 ]' 00:15:03.882 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.140 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.140 16:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.140 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.140 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.140 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.140 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.140 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.399 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:04.399 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.333 
16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.333 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.591 16:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:05.591 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.592 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.850 00:15:05.850 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.850 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.850 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.415 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.415 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.415 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.415 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.415 16:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.415 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.415 { 00:15:06.415 "cntlid": 7, 00:15:06.415 "qid": 0, 00:15:06.415 "state": "enabled", 00:15:06.415 "thread": "nvmf_tgt_poll_group_000", 00:15:06.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:06.415 "listen_address": { 00:15:06.415 "trtype": "TCP", 00:15:06.415 "adrfam": "IPv4", 00:15:06.415 "traddr": "10.0.0.2", 00:15:06.415 "trsvcid": "4420" 00:15:06.415 }, 00:15:06.415 "peer_address": { 00:15:06.415 "trtype": "TCP", 00:15:06.415 "adrfam": "IPv4", 00:15:06.415 "traddr": "10.0.0.1", 00:15:06.415 "trsvcid": "50632" 00:15:06.415 }, 00:15:06.415 "auth": { 00:15:06.415 "state": "completed", 00:15:06.415 "digest": "sha256", 00:15:06.415 "dhgroup": "null" 00:15:06.415 } 00:15:06.415 } 00:15:06.416 ]' 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.416 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:06.674 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:06.674 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:07.607 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.865 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.865 16:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.123 00:15:08.381 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.381 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.381 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.639 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.639 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.639 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.639 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.639 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.639 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.639 { 00:15:08.639 "cntlid": 9, 00:15:08.639 "qid": 0, 00:15:08.639 "state": "enabled", 00:15:08.639 "thread": "nvmf_tgt_poll_group_000", 00:15:08.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:08.639 "listen_address": { 00:15:08.639 "trtype": "TCP", 00:15:08.639 "adrfam": "IPv4", 00:15:08.639 "traddr": "10.0.0.2", 00:15:08.639 "trsvcid": "4420" 00:15:08.639 }, 00:15:08.639 "peer_address": { 
00:15:08.639 "trtype": "TCP", 00:15:08.640 "adrfam": "IPv4", 00:15:08.640 "traddr": "10.0.0.1", 00:15:08.640 "trsvcid": "50648" 00:15:08.640 }, 00:15:08.640 "auth": { 00:15:08.640 "state": "completed", 00:15:08.640 "digest": "sha256", 00:15:08.640 "dhgroup": "ffdhe2048" 00:15:08.640 } 00:15:08.640 } 00:15:08.640 ]' 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.640 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.898 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:08.898 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.832 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.091 16:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.091 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.657 00:15:10.657 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.657 16:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.657 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.914 { 00:15:10.914 "cntlid": 11, 00:15:10.914 "qid": 0, 00:15:10.914 "state": "enabled", 00:15:10.914 "thread": "nvmf_tgt_poll_group_000", 00:15:10.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:10.914 "listen_address": { 00:15:10.914 "trtype": "TCP", 00:15:10.914 "adrfam": "IPv4", 00:15:10.914 "traddr": "10.0.0.2", 00:15:10.914 "trsvcid": "4420" 00:15:10.914 }, 00:15:10.914 "peer_address": { 00:15:10.914 "trtype": "TCP", 00:15:10.914 "adrfam": "IPv4", 00:15:10.914 "traddr": "10.0.0.1", 00:15:10.914 "trsvcid": "40742" 00:15:10.914 }, 00:15:10.914 "auth": { 00:15:10.914 "state": "completed", 00:15:10.914 "digest": "sha256", 00:15:10.914 "dhgroup": "ffdhe2048" 00:15:10.914 } 00:15:10.914 } 00:15:10.914 ]' 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.914 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.172 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:11.172 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.106 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.365 16:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.365 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.930 00:15:12.930 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.930 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.930 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.188 { 00:15:13.188 "cntlid": 13, 00:15:13.188 "qid": 0, 00:15:13.188 "state": "enabled", 00:15:13.188 "thread": "nvmf_tgt_poll_group_000", 00:15:13.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:13.188 "listen_address": { 00:15:13.188 "trtype": "TCP", 00:15:13.188 "adrfam": "IPv4", 00:15:13.188 "traddr": "10.0.0.2", 00:15:13.188 "trsvcid": "4420" 00:15:13.188 }, 00:15:13.188 "peer_address": { 00:15:13.188 "trtype": "TCP", 00:15:13.188 "adrfam": "IPv4", 00:15:13.188 "traddr": "10.0.0.1", 00:15:13.188 "trsvcid": "40776" 00:15:13.188 }, 00:15:13.188 "auth": { 00:15:13.188 "state": "completed", 00:15:13.188 "digest": "sha256", 00:15:13.188 "dhgroup": "ffdhe2048" 00:15:13.188 } 00:15:13.188 } 00:15:13.188 ]' 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:13.188 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.446 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:13.446 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:14.432 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.432 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.432 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.432 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.432 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.432 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.433 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.433 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.690 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.948 00:15:14.948 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.948 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.948 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.514 { 00:15:15.514 "cntlid": 15, 00:15:15.514 "qid": 0, 00:15:15.514 "state": "enabled", 00:15:15.514 "thread": "nvmf_tgt_poll_group_000", 00:15:15.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:15.514 "listen_address": { 00:15:15.514 "trtype": "TCP", 00:15:15.514 "adrfam": "IPv4", 00:15:15.514 "traddr": "10.0.0.2", 00:15:15.514 "trsvcid": 
"4420" 00:15:15.514 }, 00:15:15.514 "peer_address": { 00:15:15.514 "trtype": "TCP", 00:15:15.514 "adrfam": "IPv4", 00:15:15.514 "traddr": "10.0.0.1", 00:15:15.514 "trsvcid": "40814" 00:15:15.514 }, 00:15:15.514 "auth": { 00:15:15.514 "state": "completed", 00:15:15.514 "digest": "sha256", 00:15:15.514 "dhgroup": "ffdhe2048" 00:15:15.514 } 00:15:15.514 } 00:15:15.514 ]' 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.514 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.514 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.514 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.514 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.514 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.514 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.772 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:15.772 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.707 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.965 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.530 00:15:17.530 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.530 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:17.530 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.788 { 00:15:17.788 "cntlid": 17, 00:15:17.788 "qid": 0, 00:15:17.788 "state": "enabled", 00:15:17.788 "thread": "nvmf_tgt_poll_group_000", 00:15:17.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:17.788 "listen_address": { 00:15:17.788 "trtype": "TCP", 00:15:17.788 "adrfam": "IPv4", 00:15:17.788 "traddr": "10.0.0.2", 00:15:17.788 "trsvcid": "4420" 00:15:17.788 }, 00:15:17.788 "peer_address": { 00:15:17.788 "trtype": "TCP", 00:15:17.788 "adrfam": "IPv4", 00:15:17.788 "traddr": "10.0.0.1", 00:15:17.788 "trsvcid": "40846" 00:15:17.788 }, 00:15:17.788 "auth": { 00:15:17.788 "state": "completed", 00:15:17.788 "digest": "sha256", 00:15:17.788 "dhgroup": "ffdhe3072" 00:15:17.788 } 00:15:17.788 } 00:15:17.788 ]' 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.788 16:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.788 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.045 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:18.045 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.979 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.237 16:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.237 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.803 00:15:19.803 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.803 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.803 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.061 { 00:15:20.061 "cntlid": 19, 00:15:20.061 "qid": 0, 00:15:20.061 "state": "enabled", 00:15:20.061 "thread": "nvmf_tgt_poll_group_000", 00:15:20.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:20.061 "listen_address": { 00:15:20.061 "trtype": "TCP", 00:15:20.061 "adrfam": "IPv4", 00:15:20.061 "traddr": "10.0.0.2", 00:15:20.061 "trsvcid": "4420" 00:15:20.061 }, 00:15:20.061 "peer_address": { 00:15:20.061 "trtype": "TCP", 00:15:20.061 "adrfam": "IPv4", 00:15:20.061 "traddr": "10.0.0.1", 00:15:20.061 "trsvcid": "40872" 00:15:20.061 }, 00:15:20.061 "auth": { 00:15:20.061 "state": "completed", 00:15:20.061 "digest": "sha256", 00:15:20.061 "dhgroup": "ffdhe3072" 00:15:20.061 } 00:15:20.061 } 00:15:20.061 ]' 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:20.061 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.319 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:20.319 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.253 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.512 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.079 00:15:22.079 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.079 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.079 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.337 { 00:15:22.337 "cntlid": 21, 00:15:22.337 "qid": 0, 00:15:22.337 "state": "enabled", 00:15:22.337 "thread": "nvmf_tgt_poll_group_000", 00:15:22.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:22.337 "listen_address": { 
00:15:22.337 "trtype": "TCP", 00:15:22.337 "adrfam": "IPv4", 00:15:22.337 "traddr": "10.0.0.2", 00:15:22.337 "trsvcid": "4420" 00:15:22.337 }, 00:15:22.337 "peer_address": { 00:15:22.337 "trtype": "TCP", 00:15:22.337 "adrfam": "IPv4", 00:15:22.337 "traddr": "10.0.0.1", 00:15:22.337 "trsvcid": "34972" 00:15:22.337 }, 00:15:22.337 "auth": { 00:15:22.337 "state": "completed", 00:15:22.337 "digest": "sha256", 00:15:22.337 "dhgroup": "ffdhe3072" 00:15:22.337 } 00:15:22.337 } 00:15:22.337 ]' 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.337 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.597 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:22.597 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.531 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.116 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.374 00:15:24.374 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.374 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:24.374 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.633 { 00:15:24.633 "cntlid": 23, 00:15:24.633 "qid": 0, 00:15:24.633 "state": "enabled", 00:15:24.633 "thread": "nvmf_tgt_poll_group_000", 00:15:24.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:24.633 "listen_address": { 00:15:24.633 "trtype": "TCP", 00:15:24.633 "adrfam": "IPv4", 00:15:24.633 "traddr": "10.0.0.2", 00:15:24.633 "trsvcid": "4420" 00:15:24.633 }, 00:15:24.633 "peer_address": { 00:15:24.633 "trtype": "TCP", 00:15:24.633 "adrfam": "IPv4", 00:15:24.633 "traddr": "10.0.0.1", 00:15:24.633 "trsvcid": "35000" 00:15:24.633 }, 00:15:24.633 "auth": { 00:15:24.633 "state": "completed", 00:15:24.633 "digest": "sha256", 00:15:24.633 "dhgroup": "ffdhe3072" 00:15:24.633 } 00:15:24.633 } 00:15:24.633 ]' 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.633 16:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.633 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.891 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.891 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.891 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.149 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:25.149 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.083 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.341 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:26.341 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.341 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.341 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.341 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.342 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.907 00:15:26.907 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.907 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.907 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.166 16:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.166 { 00:15:27.166 "cntlid": 25, 00:15:27.166 "qid": 0, 00:15:27.166 "state": "enabled", 00:15:27.166 "thread": "nvmf_tgt_poll_group_000", 00:15:27.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:27.166 "listen_address": { 00:15:27.166 "trtype": "TCP", 00:15:27.166 "adrfam": "IPv4", 00:15:27.166 "traddr": "10.0.0.2", 00:15:27.166 "trsvcid": "4420" 00:15:27.166 }, 00:15:27.166 "peer_address": { 00:15:27.166 "trtype": "TCP", 00:15:27.166 "adrfam": "IPv4", 00:15:27.166 "traddr": "10.0.0.1", 00:15:27.166 "trsvcid": "35024" 00:15:27.166 }, 00:15:27.166 "auth": { 00:15:27.166 "state": "completed", 00:15:27.166 "digest": "sha256", 00:15:27.166 "dhgroup": "ffdhe4096" 00:15:27.166 } 00:15:27.166 } 00:15:27.166 ]' 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.166 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.424 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.424 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.424 16:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.682 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:27.682 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.616 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.874 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.131 00:15:29.389 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.389 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.389 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.647 { 00:15:29.647 "cntlid": 27, 00:15:29.647 "qid": 0, 00:15:29.647 "state": "enabled", 00:15:29.647 "thread": "nvmf_tgt_poll_group_000", 00:15:29.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:29.647 
"listen_address": { 00:15:29.647 "trtype": "TCP", 00:15:29.647 "adrfam": "IPv4", 00:15:29.647 "traddr": "10.0.0.2", 00:15:29.647 "trsvcid": "4420" 00:15:29.647 }, 00:15:29.647 "peer_address": { 00:15:29.647 "trtype": "TCP", 00:15:29.647 "adrfam": "IPv4", 00:15:29.647 "traddr": "10.0.0.1", 00:15:29.647 "trsvcid": "35056" 00:15:29.647 }, 00:15:29.647 "auth": { 00:15:29.647 "state": "completed", 00:15:29.647 "digest": "sha256", 00:15:29.647 "dhgroup": "ffdhe4096" 00:15:29.647 } 00:15:29.647 } 00:15:29.647 ]' 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.647 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.648 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.648 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.648 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.648 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.648 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.648 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.906 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:29.906 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.839 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.097 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.098 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.098 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.098 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.664 00:15:31.664 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:31.664 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.664 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.922 { 00:15:31.922 "cntlid": 29, 00:15:31.922 "qid": 0, 00:15:31.922 "state": "enabled", 00:15:31.922 "thread": "nvmf_tgt_poll_group_000", 00:15:31.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:31.922 "listen_address": { 00:15:31.922 "trtype": "TCP", 00:15:31.922 "adrfam": "IPv4", 00:15:31.922 "traddr": "10.0.0.2", 00:15:31.922 "trsvcid": "4420" 00:15:31.922 }, 00:15:31.922 "peer_address": { 00:15:31.922 "trtype": "TCP", 00:15:31.922 "adrfam": "IPv4", 00:15:31.922 "traddr": "10.0.0.1", 00:15:31.922 "trsvcid": "51296" 00:15:31.922 }, 00:15:31.922 "auth": { 00:15:31.922 "state": "completed", 00:15:31.922 "digest": "sha256", 00:15:31.922 "dhgroup": "ffdhe4096" 00:15:31.922 } 00:15:31.922 } 00:15:31.922 ]' 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.922 16:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.922 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.180 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:32.180 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.115 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:33.685 16:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.685 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.943 00:15:33.943 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.943 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.943 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.202 16:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.202 { 00:15:34.202 "cntlid": 31, 00:15:34.202 "qid": 0, 00:15:34.202 "state": "enabled", 00:15:34.202 "thread": "nvmf_tgt_poll_group_000", 00:15:34.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:34.202 "listen_address": { 00:15:34.202 "trtype": "TCP", 00:15:34.202 "adrfam": "IPv4", 00:15:34.202 "traddr": "10.0.0.2", 00:15:34.202 "trsvcid": "4420" 00:15:34.202 }, 00:15:34.202 "peer_address": { 00:15:34.202 "trtype": "TCP", 00:15:34.202 "adrfam": "IPv4", 00:15:34.202 "traddr": "10.0.0.1", 00:15:34.202 "trsvcid": "51324" 00:15:34.202 }, 00:15:34.202 "auth": { 00:15:34.202 "state": "completed", 00:15:34.202 "digest": "sha256", 00:15:34.202 "dhgroup": "ffdhe4096" 00:15:34.202 } 00:15:34.202 } 00:15:34.202 ]' 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.202 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.460 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.460 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.460 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.460 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.460 16:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.718 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:34.718 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:15:35.653 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.911 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.477 00:15:36.477 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.477 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.477 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.736 { 00:15:36.736 "cntlid": 33, 00:15:36.736 "qid": 0, 00:15:36.736 "state": "enabled", 00:15:36.736 "thread": "nvmf_tgt_poll_group_000", 00:15:36.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:36.736 "listen_address": { 
00:15:36.736 "trtype": "TCP", 00:15:36.736 "adrfam": "IPv4", 00:15:36.736 "traddr": "10.0.0.2", 00:15:36.736 "trsvcid": "4420" 00:15:36.736 }, 00:15:36.736 "peer_address": { 00:15:36.736 "trtype": "TCP", 00:15:36.736 "adrfam": "IPv4", 00:15:36.736 "traddr": "10.0.0.1", 00:15:36.736 "trsvcid": "51354" 00:15:36.736 }, 00:15:36.736 "auth": { 00:15:36.736 "state": "completed", 00:15:36.736 "digest": "sha256", 00:15:36.736 "dhgroup": "ffdhe6144" 00:15:36.736 } 00:15:36.736 } 00:15:36.736 ]' 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.736 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.994 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.994 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.994 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.994 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.994 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.252 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:37.252 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.186 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.752 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.317 00:15:39.317 16:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.317 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.317 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.317 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.317 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.317 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.317 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.317 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.317 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.317 { 00:15:39.317 "cntlid": 35, 00:15:39.317 "qid": 0, 00:15:39.317 "state": "enabled", 00:15:39.317 "thread": "nvmf_tgt_poll_group_000", 00:15:39.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:39.317 "listen_address": { 00:15:39.317 "trtype": "TCP", 00:15:39.317 "adrfam": "IPv4", 00:15:39.317 "traddr": "10.0.0.2", 00:15:39.317 "trsvcid": "4420" 00:15:39.317 }, 00:15:39.317 "peer_address": { 00:15:39.317 "trtype": "TCP", 00:15:39.317 "adrfam": "IPv4", 00:15:39.317 "traddr": "10.0.0.1", 00:15:39.317 "trsvcid": "51364" 00:15:39.317 }, 00:15:39.317 "auth": { 00:15:39.317 "state": "completed", 00:15:39.317 "digest": "sha256", 00:15:39.317 "dhgroup": "ffdhe6144" 00:15:39.317 } 00:15:39.318 } 00:15:39.318 ]' 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.576 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.834 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:39.834 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.768 16:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.768 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.026 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.593 00:15:41.593 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.593 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.593 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.159 { 00:15:42.159 "cntlid": 37, 00:15:42.159 "qid": 0, 00:15:42.159 "state": "enabled", 00:15:42.159 "thread": "nvmf_tgt_poll_group_000", 00:15:42.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:42.159 "listen_address": { 00:15:42.159 "trtype": "TCP", 00:15:42.159 "adrfam": "IPv4", 00:15:42.159 "traddr": "10.0.0.2", 00:15:42.159 "trsvcid": "4420" 00:15:42.159 }, 00:15:42.159 "peer_address": { 00:15:42.159 "trtype": "TCP", 00:15:42.159 "adrfam": "IPv4", 00:15:42.159 "traddr": "10.0.0.1", 00:15:42.159 "trsvcid": "60066" 00:15:42.159 }, 00:15:42.159 "auth": { 00:15:42.159 "state": "completed", 00:15:42.159 "digest": "sha256", 00:15:42.159 "dhgroup": "ffdhe6144" 00:15:42.159 } 00:15:42.159 } 00:15:42.159 ]' 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.159 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.417 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:42.417 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:43.352 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.352 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.352 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.353 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.353 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.353 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:43.353 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.353 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.611 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.200 00:15:44.200 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.200 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.200 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.479 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.479 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.479 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.479 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.479 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.479 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.479 { 00:15:44.479 "cntlid": 39, 00:15:44.479 "qid": 0, 00:15:44.479 "state": "enabled", 00:15:44.479 "thread": "nvmf_tgt_poll_group_000", 00:15:44.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:44.479 "listen_address": { 00:15:44.479 "trtype": 
"TCP", 00:15:44.479 "adrfam": "IPv4", 00:15:44.479 "traddr": "10.0.0.2", 00:15:44.479 "trsvcid": "4420" 00:15:44.479 }, 00:15:44.479 "peer_address": { 00:15:44.479 "trtype": "TCP", 00:15:44.479 "adrfam": "IPv4", 00:15:44.479 "traddr": "10.0.0.1", 00:15:44.479 "trsvcid": "60096" 00:15:44.479 }, 00:15:44.480 "auth": { 00:15:44.480 "state": "completed", 00:15:44.480 "digest": "sha256", 00:15:44.480 "dhgroup": "ffdhe6144" 00:15:44.480 } 00:15:44.480 } 00:15:44.480 ]' 00:15:44.480 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.480 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.480 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.738 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.738 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.738 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.738 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.738 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.995 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:44.995 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.929 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.187 16:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.187 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.120 00:15:47.120 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.120 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.120 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.378 { 00:15:47.378 "cntlid": 41, 00:15:47.378 "qid": 0, 00:15:47.378 "state": "enabled", 00:15:47.378 "thread": "nvmf_tgt_poll_group_000", 00:15:47.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:47.378 "listen_address": { 00:15:47.378 "trtype": "TCP", 00:15:47.378 "adrfam": "IPv4", 00:15:47.378 "traddr": "10.0.0.2", 00:15:47.378 "trsvcid": "4420" 00:15:47.378 }, 00:15:47.378 "peer_address": { 00:15:47.378 "trtype": "TCP", 00:15:47.378 "adrfam": "IPv4", 00:15:47.378 "traddr": "10.0.0.1", 00:15:47.378 "trsvcid": "60122" 00:15:47.378 }, 00:15:47.378 "auth": { 00:15:47.378 "state": "completed", 00:15:47.378 "digest": "sha256", 00:15:47.378 "dhgroup": "ffdhe8192" 00:15:47.378 } 00:15:47.378 } 00:15:47.378 ]' 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.378 16:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.378 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.637 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.637 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.637 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.637 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.637 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.894 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:47.894 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.827 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.086 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.020 00:15:50.277 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.277 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.277 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.535 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.535 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.535 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.535 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.535 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.535 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.535 { 00:15:50.535 "cntlid": 43, 00:15:50.535 "qid": 0, 00:15:50.535 "state": "enabled", 00:15:50.535 "thread": "nvmf_tgt_poll_group_000", 00:15:50.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:50.535 "listen_address": { 00:15:50.535 "trtype": "TCP", 00:15:50.535 "adrfam": "IPv4", 00:15:50.535 "traddr": "10.0.0.2", 00:15:50.535 "trsvcid": "4420" 00:15:50.535 }, 00:15:50.535 "peer_address": { 00:15:50.535 "trtype": "TCP", 00:15:50.535 "adrfam": "IPv4", 00:15:50.535 "traddr": "10.0.0.1", 00:15:50.535 "trsvcid": "60148" 00:15:50.536 }, 00:15:50.536 "auth": { 00:15:50.536 "state": "completed", 00:15:50.536 "digest": "sha256", 00:15:50.536 "dhgroup": "ffdhe8192" 00:15:50.536 } 00:15:50.536 } 00:15:50.536 ]' 00:15:50.536 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.536 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.794 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:50.794 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.167 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.101 00:15:53.101 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.101 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.101 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.358 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.358 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.358 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.358 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.358 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.358 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.358 { 00:15:53.358 "cntlid": 45, 00:15:53.358 "qid": 0, 00:15:53.358 "state": "enabled", 00:15:53.358 "thread": "nvmf_tgt_poll_group_000", 00:15:53.358 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:53.358 "listen_address": { 00:15:53.358 "trtype": "TCP", 00:15:53.359 "adrfam": "IPv4", 00:15:53.359 "traddr": "10.0.0.2", 00:15:53.359 "trsvcid": "4420" 00:15:53.359 }, 00:15:53.359 "peer_address": { 00:15:53.359 "trtype": "TCP", 00:15:53.359 "adrfam": "IPv4", 00:15:53.359 "traddr": "10.0.0.1", 00:15:53.359 "trsvcid": "58494" 00:15:53.359 }, 00:15:53.359 "auth": { 00:15:53.359 "state": "completed", 00:15:53.359 "digest": "sha256", 00:15:53.359 "dhgroup": "ffdhe8192" 00:15:53.359 } 00:15:53.359 } 00:15:53.359 ]' 00:15:53.359 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.359 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.359 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.359 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.359 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.616 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.616 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.616 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.873 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:53.873 16:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.807 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.065 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.000 00:15:56.000 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:56.000 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.000 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.258 { 00:15:56.258 "cntlid": 47, 00:15:56.258 "qid": 0, 00:15:56.258 "state": "enabled", 00:15:56.258 "thread": "nvmf_tgt_poll_group_000", 00:15:56.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:56.258 "listen_address": { 00:15:56.258 "trtype": "TCP", 00:15:56.258 "adrfam": "IPv4", 00:15:56.258 "traddr": "10.0.0.2", 00:15:56.258 "trsvcid": "4420" 00:15:56.258 }, 00:15:56.258 "peer_address": { 00:15:56.258 "trtype": "TCP", 00:15:56.258 "adrfam": "IPv4", 00:15:56.258 "traddr": "10.0.0.1", 00:15:56.258 "trsvcid": "58540" 00:15:56.258 }, 00:15:56.258 "auth": { 00:15:56.258 "state": "completed", 00:15:56.258 "digest": "sha256", 00:15:56.258 "dhgroup": "ffdhe8192" 00:15:56.258 } 00:15:56.258 } 00:15:56.258 ]' 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.258 16:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.258 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.520 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.520 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.520 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.778 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:56.778 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:57.707 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.708 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.708 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.708 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.965 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.222 00:15:58.222 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.222 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.222 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.479 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.479 16:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.479 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.479 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.480 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.480 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.480 { 00:15:58.480 "cntlid": 49, 00:15:58.480 "qid": 0, 00:15:58.480 "state": "enabled", 00:15:58.480 "thread": "nvmf_tgt_poll_group_000", 00:15:58.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:58.480 "listen_address": { 00:15:58.480 "trtype": "TCP", 00:15:58.480 "adrfam": "IPv4", 00:15:58.480 "traddr": "10.0.0.2", 00:15:58.480 "trsvcid": "4420" 00:15:58.480 }, 00:15:58.480 "peer_address": { 00:15:58.480 "trtype": "TCP", 00:15:58.480 "adrfam": "IPv4", 00:15:58.480 "traddr": "10.0.0.1", 00:15:58.480 "trsvcid": "58568" 00:15:58.480 }, 00:15:58.480 "auth": { 00:15:58.480 "state": "completed", 00:15:58.480 "digest": "sha384", 00:15:58.480 "dhgroup": "null" 00:15:58.480 } 00:15:58.480 } 00:15:58.480 ]' 00:15:58.480 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.738 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.996 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:58.996 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.929 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.187 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.753 00:16:00.753 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.753 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.753 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.011 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.012 { 00:16:01.012 "cntlid": 51, 
00:16:01.012 "qid": 0, 00:16:01.012 "state": "enabled", 00:16:01.012 "thread": "nvmf_tgt_poll_group_000", 00:16:01.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:01.012 "listen_address": { 00:16:01.012 "trtype": "TCP", 00:16:01.012 "adrfam": "IPv4", 00:16:01.012 "traddr": "10.0.0.2", 00:16:01.012 "trsvcid": "4420" 00:16:01.012 }, 00:16:01.012 "peer_address": { 00:16:01.012 "trtype": "TCP", 00:16:01.012 "adrfam": "IPv4", 00:16:01.012 "traddr": "10.0.0.1", 00:16:01.012 "trsvcid": "36854" 00:16:01.012 }, 00:16:01.012 "auth": { 00:16:01.012 "state": "completed", 00:16:01.012 "digest": "sha384", 00:16:01.012 "dhgroup": "null" 00:16:01.012 } 00:16:01.012 } 00:16:01.012 ]' 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.012 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.270 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret 
DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:01.270 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:02.204 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:02.462 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.720 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.978 00:16:02.978 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.978 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.978 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.235 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.235 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.235 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.235 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.235 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.236 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.236 { 00:16:03.236 "cntlid": 53, 00:16:03.236 "qid": 0, 00:16:03.236 "state": "enabled", 00:16:03.236 "thread": "nvmf_tgt_poll_group_000", 00:16:03.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:03.236 "listen_address": { 00:16:03.236 "trtype": "TCP", 00:16:03.236 "adrfam": "IPv4", 00:16:03.236 "traddr": "10.0.0.2", 00:16:03.236 "trsvcid": "4420" 00:16:03.236 }, 00:16:03.236 "peer_address": { 00:16:03.236 "trtype": "TCP", 00:16:03.236 "adrfam": "IPv4", 00:16:03.236 "traddr": "10.0.0.1", 00:16:03.236 "trsvcid": "36884" 00:16:03.236 }, 00:16:03.236 "auth": { 00:16:03.236 "state": "completed", 00:16:03.236 "digest": "sha384", 00:16:03.236 "dhgroup": "null" 00:16:03.236 } 00:16:03.236 } 
00:16:03.236 ]' 00:16:03.236 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.236 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.236 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.493 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.493 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.493 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.493 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.493 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.751 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:03.751 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.684 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.684 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.943 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.509 00:16:05.509 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.509 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.509 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.767 { 00:16:05.767 "cntlid": 55, 00:16:05.767 "qid": 0, 00:16:05.767 "state": "enabled", 00:16:05.767 "thread": "nvmf_tgt_poll_group_000", 00:16:05.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:05.767 "listen_address": { 00:16:05.767 "trtype": "TCP", 00:16:05.767 "adrfam": "IPv4", 00:16:05.767 "traddr": "10.0.0.2", 00:16:05.767 "trsvcid": "4420" 00:16:05.767 }, 00:16:05.767 "peer_address": { 00:16:05.767 "trtype": "TCP", 00:16:05.767 "adrfam": "IPv4", 00:16:05.767 "traddr": "10.0.0.1", 00:16:05.767 "trsvcid": "36914" 00:16:05.767 }, 00:16:05.767 "auth": { 00:16:05.767 "state": "completed", 00:16:05.767 "digest": "sha384", 00:16:05.767 "dhgroup": "null" 00:16:05.767 } 00:16:05.767 } 00:16:05.767 ]' 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:05.767 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.768 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.768 16:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.768 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.026 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:06.026 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.961 16:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.961 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.528 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.786 00:16:07.786 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.786 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.786 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.044 { 00:16:08.044 "cntlid": 57, 00:16:08.044 "qid": 0, 00:16:08.044 "state": "enabled", 00:16:08.044 "thread": "nvmf_tgt_poll_group_000", 00:16:08.044 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:08.044 "listen_address": { 00:16:08.044 "trtype": "TCP", 00:16:08.044 "adrfam": "IPv4", 00:16:08.044 "traddr": "10.0.0.2", 00:16:08.044 "trsvcid": "4420" 00:16:08.044 }, 00:16:08.044 "peer_address": { 00:16:08.044 "trtype": "TCP", 00:16:08.044 "adrfam": "IPv4", 00:16:08.044 "traddr": "10.0.0.1", 00:16:08.044 "trsvcid": "36932" 00:16:08.044 }, 00:16:08.044 "auth": { 00:16:08.044 "state": "completed", 00:16:08.044 "digest": "sha384", 00:16:08.044 "dhgroup": "ffdhe2048" 00:16:08.044 } 00:16:08.044 } 00:16:08.044 ]' 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.044 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.302 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.302 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.302 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.560 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret 
DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:08.560 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:09.493 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:09.493 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:09.751 16:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.751 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.009 00:16:10.009 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.009 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.009 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.266 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.266 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.266 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.266 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.266 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.266 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.266 { 00:16:10.266 "cntlid": 59, 00:16:10.267 "qid": 0, 00:16:10.267 "state": "enabled", 00:16:10.267 "thread": "nvmf_tgt_poll_group_000", 00:16:10.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:10.267 "listen_address": { 00:16:10.267 "trtype": "TCP", 00:16:10.267 "adrfam": "IPv4", 00:16:10.267 "traddr": "10.0.0.2", 00:16:10.267 "trsvcid": "4420" 00:16:10.267 }, 00:16:10.267 "peer_address": { 00:16:10.267 "trtype": "TCP", 00:16:10.267 "adrfam": "IPv4", 00:16:10.267 "traddr": "10.0.0.1", 00:16:10.267 "trsvcid": "40852" 00:16:10.267 }, 00:16:10.267 "auth": { 00:16:10.267 "state": 
"completed", 00:16:10.267 "digest": "sha384", 00:16:10.267 "dhgroup": "ffdhe2048" 00:16:10.267 } 00:16:10.267 } 00:16:10.267 ]' 00:16:10.267 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.524 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.524 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.524 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.524 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.524 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.524 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.524 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.782 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:10.782 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:11.715 16:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:11.715 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.973 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.539 00:16:12.539 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.539 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.539 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.797 
16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.797 { 00:16:12.797 "cntlid": 61, 00:16:12.797 "qid": 0, 00:16:12.797 "state": "enabled", 00:16:12.797 "thread": "nvmf_tgt_poll_group_000", 00:16:12.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:12.797 "listen_address": { 00:16:12.797 "trtype": "TCP", 00:16:12.797 "adrfam": "IPv4", 00:16:12.797 "traddr": "10.0.0.2", 00:16:12.797 "trsvcid": "4420" 00:16:12.797 }, 00:16:12.797 "peer_address": { 00:16:12.797 "trtype": "TCP", 00:16:12.797 "adrfam": "IPv4", 00:16:12.797 "traddr": "10.0.0.1", 00:16:12.797 "trsvcid": "40882" 00:16:12.797 }, 00:16:12.797 "auth": { 00:16:12.797 "state": "completed", 00:16:12.797 "digest": "sha384", 00:16:12.797 "dhgroup": "ffdhe2048" 00:16:12.797 } 00:16:12.797 } 00:16:12.797 ]' 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.797 16:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.797 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.055 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:13.055 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.463 
16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.463 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.463 16:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.463 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.747 00:16:15.005 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.005 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.005 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.262 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.262 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.262 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.262 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.262 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.262 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.262 { 00:16:15.262 "cntlid": 63, 00:16:15.262 
"qid": 0, 00:16:15.263 "state": "enabled", 00:16:15.263 "thread": "nvmf_tgt_poll_group_000", 00:16:15.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:15.263 "listen_address": { 00:16:15.263 "trtype": "TCP", 00:16:15.263 "adrfam": "IPv4", 00:16:15.263 "traddr": "10.0.0.2", 00:16:15.263 "trsvcid": "4420" 00:16:15.263 }, 00:16:15.263 "peer_address": { 00:16:15.263 "trtype": "TCP", 00:16:15.263 "adrfam": "IPv4", 00:16:15.263 "traddr": "10.0.0.1", 00:16:15.263 "trsvcid": "40912" 00:16:15.263 }, 00:16:15.263 "auth": { 00:16:15.263 "state": "completed", 00:16:15.263 "digest": "sha384", 00:16:15.263 "dhgroup": "ffdhe2048" 00:16:15.263 } 00:16:15.263 } 00:16:15.263 ]' 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.263 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.520 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:15.521 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.454 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.712 16:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.712 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.276 00:16:17.276 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.276 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.276 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.534 { 00:16:17.534 "cntlid": 65, 00:16:17.534 "qid": 0, 00:16:17.534 "state": "enabled", 00:16:17.534 "thread": "nvmf_tgt_poll_group_000", 00:16:17.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:17.534 "listen_address": { 00:16:17.534 "trtype": "TCP", 00:16:17.534 "adrfam": "IPv4", 00:16:17.534 "traddr": "10.0.0.2", 00:16:17.534 "trsvcid": "4420" 00:16:17.534 }, 00:16:17.534 "peer_address": { 00:16:17.534 "trtype": "TCP", 00:16:17.534 "adrfam": "IPv4", 00:16:17.534 "traddr": "10.0.0.1", 00:16:17.534 "trsvcid": "40936" 00:16:17.534 }, 00:16:17.534 "auth": { 00:16:17.534 "state": 
"completed", 00:16:17.534 "digest": "sha384", 00:16:17.534 "dhgroup": "ffdhe3072" 00:16:17.534 } 00:16:17.534 } 00:16:17.534 ]' 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.534 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.792 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:17.792 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret 
DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.166 16:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.733 00:16:19.733 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.733 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.733 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.991 { 00:16:19.991 "cntlid": 67, 00:16:19.991 "qid": 0, 00:16:19.991 "state": "enabled", 00:16:19.991 "thread": "nvmf_tgt_poll_group_000", 00:16:19.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:19.991 "listen_address": { 00:16:19.991 "trtype": "TCP", 00:16:19.991 "adrfam": "IPv4", 00:16:19.991 "traddr": "10.0.0.2", 00:16:19.991 "trsvcid": "4420" 00:16:19.991 }, 00:16:19.991 "peer_address": { 00:16:19.991 "trtype": "TCP", 00:16:19.991 "adrfam": "IPv4", 00:16:19.991 "traddr": "10.0.0.1", 00:16:19.991 "trsvcid": "40972" 00:16:19.991 }, 00:16:19.991 "auth": { 00:16:19.991 "state": "completed", 00:16:19.991 "digest": "sha384", 00:16:19.991 "dhgroup": "ffdhe3072" 00:16:19.991 } 00:16:19.991 } 00:16:19.991 ]' 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.991 16:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.991 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.249 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:20.249 16:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:21.182 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.182 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.182 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:21.182 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.182 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.182 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.183 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.183 16:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.749 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.750 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.750 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.008 00:16:22.008 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.008 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.008 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.265 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.265 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.265 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.265 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.265 16:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.265 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.265 { 00:16:22.266 "cntlid": 69, 00:16:22.266 "qid": 0, 00:16:22.266 "state": "enabled", 00:16:22.266 "thread": "nvmf_tgt_poll_group_000", 00:16:22.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:22.266 "listen_address": { 00:16:22.266 "trtype": "TCP", 00:16:22.266 "adrfam": "IPv4", 00:16:22.266 "traddr": "10.0.0.2", 00:16:22.266 "trsvcid": "4420" 00:16:22.266 }, 00:16:22.266 "peer_address": { 00:16:22.266 "trtype": "TCP", 00:16:22.266 "adrfam": "IPv4", 00:16:22.266 "traddr": "10.0.0.1", 00:16:22.266 "trsvcid": "59842" 00:16:22.266 }, 00:16:22.266 "auth": { 00:16:22.266 "state": "completed", 00:16:22.266 "digest": "sha384", 00:16:22.266 "dhgroup": "ffdhe3072" 00:16:22.266 } 00:16:22.266 } 00:16:22.266 ]' 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.266 16:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.831 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:22.831 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:23.764 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:23.765 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:24.021 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:24.021 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.021 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.021 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.021 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.022 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.279 00:16:24.279 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.279 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.279 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.537 { 00:16:24.537 "cntlid": 71, 00:16:24.537 "qid": 0, 00:16:24.537 "state": "enabled", 00:16:24.537 "thread": "nvmf_tgt_poll_group_000", 00:16:24.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:24.537 "listen_address": { 00:16:24.537 "trtype": "TCP", 00:16:24.537 "adrfam": "IPv4", 00:16:24.537 "traddr": "10.0.0.2", 00:16:24.537 "trsvcid": "4420" 00:16:24.537 }, 00:16:24.537 "peer_address": { 00:16:24.537 "trtype": "TCP", 00:16:24.537 "adrfam": "IPv4", 00:16:24.537 "traddr": "10.0.0.1", 
00:16:24.537 "trsvcid": "59870" 00:16:24.537 }, 00:16:24.537 "auth": { 00:16:24.537 "state": "completed", 00:16:24.537 "digest": "sha384", 00:16:24.537 "dhgroup": "ffdhe3072" 00:16:24.537 } 00:16:24.537 } 00:16:24.537 ]' 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.537 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.795 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.795 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.795 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.795 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.795 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.052 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:25.052 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:25.986 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.245 16:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.245 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.811 00:16:26.811 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.811 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.811 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.069 { 00:16:27.069 "cntlid": 73, 00:16:27.069 "qid": 0, 00:16:27.069 "state": "enabled", 00:16:27.069 "thread": "nvmf_tgt_poll_group_000", 00:16:27.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:27.069 "listen_address": { 00:16:27.069 "trtype": "TCP", 00:16:27.069 "adrfam": "IPv4", 00:16:27.069 "traddr": "10.0.0.2", 00:16:27.069 "trsvcid": "4420" 00:16:27.069 }, 00:16:27.069 "peer_address": { 00:16:27.069 "trtype": "TCP", 00:16:27.069 "adrfam": "IPv4", 00:16:27.069 "traddr": "10.0.0.1", 00:16:27.069 "trsvcid": "59902" 00:16:27.069 }, 00:16:27.069 "auth": { 00:16:27.069 "state": "completed", 00:16:27.069 "digest": "sha384", 00:16:27.069 "dhgroup": "ffdhe4096" 00:16:27.069 } 00:16:27.069 } 00:16:27.069 ]' 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.069 16:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.069 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.327 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=:
00:16:27.327 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=:
00:16:28.704 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.704 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:28.704 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.704 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.704 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.705 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:28.705 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:28.705 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.705 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:29.270
00:16:29.270 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:29.270 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:29.270 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.528 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.528 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.528 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.528 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.528 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.528 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:29.528 {
00:16:29.528 "cntlid": 75,
00:16:29.528 "qid": 0,
00:16:29.529 "state": "enabled",
00:16:29.529 "thread": "nvmf_tgt_poll_group_000",
00:16:29.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:29.529 "listen_address": {
00:16:29.529 "trtype": "TCP",
00:16:29.529 "adrfam": "IPv4",
00:16:29.529 "traddr": "10.0.0.2",
00:16:29.529 "trsvcid": "4420"
00:16:29.529 },
00:16:29.529 "peer_address": {
00:16:29.529 "trtype": "TCP",
00:16:29.529 "adrfam": "IPv4",
00:16:29.529 "traddr": "10.0.0.1",
00:16:29.529 "trsvcid": "59938"
00:16:29.529 },
00:16:29.529 "auth": {
00:16:29.529 "state": "completed",
00:16:29.529 "digest": "sha384",
00:16:29.529 "dhgroup": "ffdhe4096"
00:16:29.529 }
00:16:29.529 }
00:16:29.529 ]'
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.529 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.787 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==:
00:16:29.787 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==:
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:30.722 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:31.288 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:31.546
00:16:31.546 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:31.546 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:31.546 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:31.804 {
00:16:31.804 "cntlid": 77,
00:16:31.804 "qid": 0,
00:16:31.804 "state": "enabled",
00:16:31.804 "thread": "nvmf_tgt_poll_group_000",
00:16:31.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:31.804 "listen_address": {
00:16:31.804 "trtype": "TCP",
00:16:31.804 "adrfam": "IPv4",
00:16:31.804 "traddr": "10.0.0.2",
00:16:31.804 "trsvcid": "4420"
00:16:31.804 },
00:16:31.804 "peer_address": {
00:16:31.804 "trtype": "TCP",
00:16:31.804 "adrfam": "IPv4",
00:16:31.804 "traddr": "10.0.0.1",
00:16:31.804 "trsvcid": "58418"
00:16:31.804 },
00:16:31.804 "auth": {
00:16:31.804 "state": "completed",
00:16:31.804 "digest": "sha384",
00:16:31.804 "dhgroup": "ffdhe4096"
00:16:31.804 }
00:16:31.804 }
00:16:31.804 ]'
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:31.804 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.062 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:32.062 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.062 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.062 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.062 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.320 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF:
00:16:32.320 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF:
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:33.253 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:33.511 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:16:33.511 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.511 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:33.511 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:33.511 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:33.511 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:33.512 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:34.097
00:16:34.097 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:34.097 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.097 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:34.355 {
00:16:34.355 "cntlid": 79,
00:16:34.355 "qid": 0,
00:16:34.355 "state": "enabled",
00:16:34.355 "thread": "nvmf_tgt_poll_group_000",
00:16:34.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:34.355 "listen_address": {
00:16:34.355 "trtype": "TCP",
00:16:34.355 "adrfam": "IPv4",
00:16:34.355 "traddr": "10.0.0.2",
00:16:34.355 "trsvcid": "4420"
00:16:34.355 },
00:16:34.355 "peer_address": {
00:16:34.355 "trtype": "TCP",
00:16:34.355 "adrfam": "IPv4",
00:16:34.355 "traddr": "10.0.0.1",
00:16:34.355 "trsvcid": "58444"
00:16:34.355 },
00:16:34.355 "auth": {
00:16:34.355 "state": "completed",
00:16:34.355 "digest": "sha384",
00:16:34.355 "dhgroup": "ffdhe4096"
00:16:34.355 }
00:16:34.355 }
00:16:34.355 ]'
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.355 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.613 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=:
00:16:34.613 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=:
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:35.547 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:36.113 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:36.679
00:16:36.679 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:36.679 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:36.679 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:36.937 {
00:16:36.937 "cntlid": 81,
00:16:36.937 "qid": 0,
00:16:36.937 "state": "enabled",
00:16:36.937 "thread": "nvmf_tgt_poll_group_000",
00:16:36.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:36.937 "listen_address": {
00:16:36.937 "trtype": "TCP",
00:16:36.937 "adrfam": "IPv4",
00:16:36.937 "traddr": "10.0.0.2",
00:16:36.937 "trsvcid": "4420"
00:16:36.937 },
00:16:36.937 "peer_address": {
00:16:36.937 "trtype": "TCP",
00:16:36.937 "adrfam": "IPv4",
00:16:36.937 "traddr": "10.0.0.1",
00:16:36.937 "trsvcid": "58474"
00:16:36.937 },
00:16:36.937 "auth": {
00:16:36.937 "state": "completed",
00:16:36.937 "digest": "sha384",
00:16:36.937 "dhgroup": "ffdhe6144"
00:16:36.937 }
00:16:36.937 }
00:16:36.937 ]'
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.937 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:37.195 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=:
00:16:37.195 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=:
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:38.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:38.569 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:38.569 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:39.137
00:16:39.137 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:39.137 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:39.137 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:39.396 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:39.396 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:39.396 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.396 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.396 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.396 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:39.396 {
00:16:39.396 "cntlid": 83,
00:16:39.396 "qid": 0,
00:16:39.396 "state": "enabled",
00:16:39.396 "thread": "nvmf_tgt_poll_group_000",
00:16:39.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:39.396 "listen_address": {
00:16:39.396 "trtype": "TCP",
00:16:39.396 "adrfam": "IPv4",
00:16:39.396 "traddr": "10.0.0.2",
00:16:39.396 "trsvcid": "4420"
00:16:39.396 },
00:16:39.396 "peer_address": {
00:16:39.396 "trtype": "TCP",
00:16:39.396 "adrfam": "IPv4",
00:16:39.396 "traddr": "10.0.0.1",
00:16:39.396 "trsvcid": "58506"
00:16:39.396 },
00:16:39.396 "auth": {
00:16:39.396 "state": "completed",
00:16:39.396 "digest": "sha384",
00:16:39.396 "dhgroup": "ffdhe6144"
00:16:39.396 }
00:16:39.396 }
00:16:39.396 ]'
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:39.654 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:39.912 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==:
00:16:39.912 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==:
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:40.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:40.848 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.106 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.107 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:41.107 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:41.107 16:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.041
00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r
'.[].name' 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.041 { 00:16:42.041 "cntlid": 85, 00:16:42.041 "qid": 0, 00:16:42.041 "state": "enabled", 00:16:42.041 "thread": "nvmf_tgt_poll_group_000", 00:16:42.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:42.041 "listen_address": { 00:16:42.041 "trtype": "TCP", 00:16:42.041 "adrfam": "IPv4", 00:16:42.041 "traddr": "10.0.0.2", 00:16:42.041 "trsvcid": "4420" 00:16:42.041 }, 00:16:42.041 "peer_address": { 00:16:42.041 "trtype": "TCP", 00:16:42.041 "adrfam": "IPv4", 00:16:42.041 "traddr": "10.0.0.1", 00:16:42.041 "trsvcid": "57850" 00:16:42.041 }, 00:16:42.041 "auth": { 00:16:42.041 "state": "completed", 00:16:42.041 "digest": "sha384", 00:16:42.041 "dhgroup": "ffdhe6144" 00:16:42.041 } 00:16:42.041 } 00:16:42.041 ]' 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.041 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.041 16:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.299 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.299 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.299 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.299 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.299 16:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.559 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:42.559 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:43.497 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.497 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.497 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.498 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.498 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.498 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.498 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.498 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.760 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.761 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.761 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.330 00:16:44.330 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.330 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.330 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.589 { 00:16:44.589 "cntlid": 87, 00:16:44.589 "qid": 0, 00:16:44.589 "state": "enabled", 00:16:44.589 "thread": "nvmf_tgt_poll_group_000", 00:16:44.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:44.589 "listen_address": { 00:16:44.589 "trtype": "TCP", 00:16:44.589 "adrfam": "IPv4", 00:16:44.589 "traddr": "10.0.0.2", 00:16:44.589 "trsvcid": "4420" 00:16:44.589 }, 00:16:44.589 "peer_address": { 00:16:44.589 "trtype": "TCP", 00:16:44.589 "adrfam": "IPv4", 00:16:44.589 "traddr": "10.0.0.1", 00:16:44.589 "trsvcid": "57884" 00:16:44.589 }, 00:16:44.589 "auth": { 00:16:44.589 "state": "completed", 00:16:44.589 "digest": "sha384", 00:16:44.589 "dhgroup": "ffdhe6144" 00:16:44.589 } 00:16:44.589 } 00:16:44.589 ]' 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.589 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.847 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.847 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.847 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.134 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:45.134 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.098 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:46.098 16:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.356 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.292 00:16:47.292 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.292 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.292 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.550 { 00:16:47.550 "cntlid": 89, 00:16:47.550 "qid": 0, 00:16:47.550 "state": "enabled", 00:16:47.550 "thread": "nvmf_tgt_poll_group_000", 00:16:47.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:47.550 "listen_address": { 00:16:47.550 "trtype": "TCP", 00:16:47.550 "adrfam": "IPv4", 00:16:47.550 "traddr": "10.0.0.2", 00:16:47.550 
"trsvcid": "4420" 00:16:47.550 }, 00:16:47.550 "peer_address": { 00:16:47.550 "trtype": "TCP", 00:16:47.550 "adrfam": "IPv4", 00:16:47.550 "traddr": "10.0.0.1", 00:16:47.550 "trsvcid": "57912" 00:16:47.550 }, 00:16:47.550 "auth": { 00:16:47.550 "state": "completed", 00:16:47.550 "digest": "sha384", 00:16:47.550 "dhgroup": "ffdhe8192" 00:16:47.550 } 00:16:47.550 } 00:16:47.550 ]' 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.550 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.808 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.809 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.809 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.067 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:48.067 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.003 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.261 16:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.261 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.262 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.262 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.200 00:16:50.200 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.200 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.200 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.459 { 00:16:50.459 "cntlid": 91, 00:16:50.459 "qid": 0, 00:16:50.459 "state": "enabled", 00:16:50.459 "thread": "nvmf_tgt_poll_group_000", 00:16:50.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:50.459 "listen_address": { 00:16:50.459 "trtype": "TCP", 00:16:50.459 "adrfam": "IPv4", 00:16:50.459 "traddr": "10.0.0.2", 00:16:50.459 "trsvcid": "4420" 00:16:50.459 }, 00:16:50.459 "peer_address": { 00:16:50.459 "trtype": "TCP", 00:16:50.459 "adrfam": "IPv4", 00:16:50.459 "traddr": "10.0.0.1", 00:16:50.459 "trsvcid": "57942" 00:16:50.459 }, 00:16:50.459 "auth": { 00:16:50.459 "state": "completed", 00:16:50.459 "digest": "sha384", 00:16:50.459 "dhgroup": "ffdhe8192" 00:16:50.459 } 00:16:50.459 } 00:16:50.459 ]' 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.459 16:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.459 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.717 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.976 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:50.976 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.916 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.174 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.175 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.113 00:16:53.113 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.113 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.113 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.372 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.372 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.372 16:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.372 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.372 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.372 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.372 { 00:16:53.372 "cntlid": 93, 00:16:53.372 "qid": 0, 00:16:53.372 "state": "enabled", 00:16:53.372 "thread": "nvmf_tgt_poll_group_000", 00:16:53.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.372 "listen_address": { 00:16:53.372 "trtype": "TCP", 00:16:53.372 "adrfam": "IPv4", 00:16:53.372 "traddr": "10.0.0.2", 00:16:53.372 "trsvcid": "4420" 00:16:53.372 }, 00:16:53.372 "peer_address": { 00:16:53.372 "trtype": "TCP", 00:16:53.372 "adrfam": "IPv4", 00:16:53.372 "traddr": "10.0.0.1", 00:16:53.372 "trsvcid": "48558" 00:16:53.372 }, 00:16:53.372 "auth": { 00:16:53.372 "state": "completed", 00:16:53.372 "digest": "sha384", 00:16:53.372 "dhgroup": "ffdhe8192" 00:16:53.372 } 00:16:53.372 } 00:16:53.372 ]' 00:16:53.372 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.372 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.372 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.372 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.372 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.888 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:53.888 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.826 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.085 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.021 00:16:56.021 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.021 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.021 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.279 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.279 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.279 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.279 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.279 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.279 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.279 { 00:16:56.279 "cntlid": 95, 00:16:56.279 "qid": 0, 00:16:56.279 "state": "enabled", 00:16:56.279 "thread": "nvmf_tgt_poll_group_000", 00:16:56.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:56.279 "listen_address": { 00:16:56.279 "trtype": "TCP", 00:16:56.279 "adrfam": 
"IPv4", 00:16:56.279 "traddr": "10.0.0.2", 00:16:56.279 "trsvcid": "4420" 00:16:56.279 }, 00:16:56.280 "peer_address": { 00:16:56.280 "trtype": "TCP", 00:16:56.280 "adrfam": "IPv4", 00:16:56.280 "traddr": "10.0.0.1", 00:16:56.280 "trsvcid": "48598" 00:16:56.280 }, 00:16:56.280 "auth": { 00:16:56.280 "state": "completed", 00:16:56.280 "digest": "sha384", 00:16:56.280 "dhgroup": "ffdhe8192" 00:16:56.280 } 00:16:56.280 } 00:16:56.280 ]' 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.280 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.539 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:56.539 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:16:57.477 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.477 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:57.477 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.477 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.735 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.735 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:57.735 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.735 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.735 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.735 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.993 
16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.993 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.251 00:16:58.251 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.251 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.251 16:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.509 { 00:16:58.509 "cntlid": 97, 00:16:58.509 "qid": 0, 00:16:58.509 "state": "enabled", 00:16:58.509 "thread": "nvmf_tgt_poll_group_000", 00:16:58.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:58.509 "listen_address": { 00:16:58.509 "trtype": "TCP", 00:16:58.509 "adrfam": "IPv4", 00:16:58.509 "traddr": "10.0.0.2", 00:16:58.509 "trsvcid": "4420" 00:16:58.509 }, 00:16:58.509 "peer_address": { 00:16:58.509 "trtype": "TCP", 00:16:58.509 "adrfam": "IPv4", 00:16:58.509 "traddr": "10.0.0.1", 00:16:58.509 "trsvcid": "48632" 00:16:58.509 }, 00:16:58.509 "auth": { 00:16:58.509 "state": "completed", 00:16:58.509 "digest": "sha512", 00:16:58.509 "dhgroup": "null" 00:16:58.509 } 00:16:58.509 } 00:16:58.509 ]' 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.509 16:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.509 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.768 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.768 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.768 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.768 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.768 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.025 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:59.025 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.963 16:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.963 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.221 16:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.790 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.790 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.048 { 00:17:01.048 "cntlid": 99, 00:17:01.048 "qid": 0, 00:17:01.048 "state": "enabled", 00:17:01.048 "thread": "nvmf_tgt_poll_group_000", 00:17:01.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:01.048 "listen_address": { 00:17:01.048 "trtype": "TCP", 00:17:01.048 "adrfam": "IPv4", 00:17:01.048 "traddr": "10.0.0.2", 00:17:01.048 "trsvcid": "4420" 00:17:01.048 }, 00:17:01.048 "peer_address": { 00:17:01.048 "trtype": "TCP", 00:17:01.048 "adrfam": "IPv4", 00:17:01.048 "traddr": "10.0.0.1", 00:17:01.048 "trsvcid": "39018" 00:17:01.048 }, 00:17:01.048 "auth": { 00:17:01.048 "state": "completed", 00:17:01.048 "digest": "sha512", 00:17:01.048 "dhgroup": "null" 00:17:01.048 } 00:17:01.048 } 00:17:01.048 ]' 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.048 
16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.048 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.306 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:01.306 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.240 
16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.240 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.498 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.065 00:17:03.065 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.065 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.065 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.324 { 00:17:03.324 "cntlid": 101, 00:17:03.324 "qid": 0, 00:17:03.324 "state": "enabled", 00:17:03.324 "thread": "nvmf_tgt_poll_group_000", 00:17:03.324 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:03.324 "listen_address": { 00:17:03.324 "trtype": "TCP", 00:17:03.324 "adrfam": "IPv4", 00:17:03.324 "traddr": "10.0.0.2", 00:17:03.324 "trsvcid": "4420" 00:17:03.324 }, 00:17:03.324 "peer_address": { 00:17:03.324 "trtype": "TCP", 00:17:03.324 "adrfam": "IPv4", 00:17:03.324 "traddr": "10.0.0.1", 00:17:03.324 "trsvcid": "39036" 00:17:03.324 }, 00:17:03.324 "auth": { 00:17:03.324 "state": "completed", 00:17:03.324 "digest": "sha512", 00:17:03.324 "dhgroup": "null" 00:17:03.324 } 00:17:03.324 } 00:17:03.324 ]' 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.324 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.583 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:03.583 16:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.519 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.086 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.345 00:17:05.345 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.345 
16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.345 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.603 { 00:17:05.603 "cntlid": 103, 00:17:05.603 "qid": 0, 00:17:05.603 "state": "enabled", 00:17:05.603 "thread": "nvmf_tgt_poll_group_000", 00:17:05.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:05.603 "listen_address": { 00:17:05.603 "trtype": "TCP", 00:17:05.603 "adrfam": "IPv4", 00:17:05.603 "traddr": "10.0.0.2", 00:17:05.603 "trsvcid": "4420" 00:17:05.603 }, 00:17:05.603 "peer_address": { 00:17:05.603 "trtype": "TCP", 00:17:05.603 "adrfam": "IPv4", 00:17:05.603 "traddr": "10.0.0.1", 00:17:05.603 "trsvcid": "39076" 00:17:05.603 }, 00:17:05.603 "auth": { 00:17:05.603 "state": "completed", 00:17:05.603 "digest": "sha512", 00:17:05.603 "dhgroup": "null" 00:17:05.603 } 00:17:05.603 } 00:17:05.603 ]' 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.603 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.861 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.861 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.861 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.119 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:06.119 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.054 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.312 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.571 00:17:07.572 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.572 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.572 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.830 { 00:17:07.830 "cntlid": 105, 00:17:07.830 "qid": 0, 00:17:07.830 "state": "enabled", 00:17:07.830 "thread": "nvmf_tgt_poll_group_000", 00:17:07.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:07.830 "listen_address": { 00:17:07.830 "trtype": "TCP", 00:17:07.830 "adrfam": "IPv4", 00:17:07.830 "traddr": "10.0.0.2", 00:17:07.830 "trsvcid": "4420" 00:17:07.830 }, 00:17:07.830 "peer_address": { 00:17:07.830 "trtype": "TCP", 00:17:07.830 "adrfam": "IPv4", 00:17:07.830 "traddr": "10.0.0.1", 00:17:07.830 "trsvcid": "39088" 00:17:07.830 }, 00:17:07.830 "auth": { 00:17:07.830 "state": "completed", 00:17:07.830 "digest": "sha512", 00:17:07.830 "dhgroup": "ffdhe2048" 00:17:07.830 } 00:17:07.830 } 00:17:07.830 ]' 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.830 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.089 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.089 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.089 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.089 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.089 16:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.385 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:08.385 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.350 16:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.610 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.868 00:17:09.868 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.868 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.868 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.127 { 00:17:10.127 "cntlid": 107, 00:17:10.127 "qid": 0, 00:17:10.127 "state": "enabled", 00:17:10.127 "thread": "nvmf_tgt_poll_group_000", 00:17:10.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:10.127 
"listen_address": { 00:17:10.127 "trtype": "TCP", 00:17:10.127 "adrfam": "IPv4", 00:17:10.127 "traddr": "10.0.0.2", 00:17:10.127 "trsvcid": "4420" 00:17:10.127 }, 00:17:10.127 "peer_address": { 00:17:10.127 "trtype": "TCP", 00:17:10.127 "adrfam": "IPv4", 00:17:10.127 "traddr": "10.0.0.1", 00:17:10.127 "trsvcid": "38826" 00:17:10.127 }, 00:17:10.127 "auth": { 00:17:10.127 "state": "completed", 00:17:10.127 "digest": "sha512", 00:17:10.127 "dhgroup": "ffdhe2048" 00:17:10.127 } 00:17:10.127 } 00:17:10.127 ]' 00:17:10.127 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.386 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.645 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:10.645 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:11.587 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.588 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.846 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.866 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.866 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.866 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.867 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.867 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.867 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.434 00:17:12.434 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:12.434 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.434 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.434 { 00:17:12.434 "cntlid": 109, 00:17:12.434 "qid": 0, 00:17:12.434 "state": "enabled", 00:17:12.434 "thread": "nvmf_tgt_poll_group_000", 00:17:12.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:12.434 "listen_address": { 00:17:12.434 "trtype": "TCP", 00:17:12.434 "adrfam": "IPv4", 00:17:12.434 "traddr": "10.0.0.2", 00:17:12.434 "trsvcid": "4420" 00:17:12.434 }, 00:17:12.434 "peer_address": { 00:17:12.434 "trtype": "TCP", 00:17:12.434 "adrfam": "IPv4", 00:17:12.434 "traddr": "10.0.0.1", 00:17:12.434 "trsvcid": "38860" 00:17:12.434 }, 00:17:12.434 "auth": { 00:17:12.434 "state": "completed", 00:17:12.434 "digest": "sha512", 00:17:12.434 "dhgroup": "ffdhe2048" 00:17:12.434 } 00:17:12.434 } 00:17:12.434 ]' 00:17:12.434 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.693 16:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.693 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.693 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.693 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.693 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.693 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.693 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.951 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:12.951 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.889 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:14.148 16:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.148 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.713 00:17:14.713 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.713 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.713 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.972 16:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.972 { 00:17:14.972 "cntlid": 111, 00:17:14.972 "qid": 0, 00:17:14.972 "state": "enabled", 00:17:14.972 "thread": "nvmf_tgt_poll_group_000", 00:17:14.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:14.972 "listen_address": { 00:17:14.972 "trtype": "TCP", 00:17:14.972 "adrfam": "IPv4", 00:17:14.972 "traddr": "10.0.0.2", 00:17:14.972 "trsvcid": "4420" 00:17:14.972 }, 00:17:14.972 "peer_address": { 00:17:14.972 "trtype": "TCP", 00:17:14.972 "adrfam": "IPv4", 00:17:14.972 "traddr": "10.0.0.1", 00:17:14.972 "trsvcid": "38890" 00:17:14.972 }, 00:17:14.972 "auth": { 00:17:14.972 "state": "completed", 00:17:14.972 "digest": "sha512", 00:17:14.972 "dhgroup": "ffdhe2048" 00:17:14.972 } 00:17:14.972 } 00:17:14.972 ]' 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.972 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.972 16:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.230 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:15.230 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:17:16.166 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.732 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.733 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.991 00:17:16.991 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.991 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.991 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.249 { 00:17:17.249 "cntlid": 113, 00:17:17.249 "qid": 0, 00:17:17.249 "state": "enabled", 00:17:17.249 "thread": "nvmf_tgt_poll_group_000", 00:17:17.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:17.249 "listen_address": { 
00:17:17.249 "trtype": "TCP", 00:17:17.249 "adrfam": "IPv4", 00:17:17.249 "traddr": "10.0.0.2", 00:17:17.249 "trsvcid": "4420" 00:17:17.249 }, 00:17:17.249 "peer_address": { 00:17:17.249 "trtype": "TCP", 00:17:17.249 "adrfam": "IPv4", 00:17:17.249 "traddr": "10.0.0.1", 00:17:17.249 "trsvcid": "38916" 00:17:17.249 }, 00:17:17.249 "auth": { 00:17:17.249 "state": "completed", 00:17:17.249 "digest": "sha512", 00:17:17.249 "dhgroup": "ffdhe3072" 00:17:17.249 } 00:17:17.249 } 00:17:17.249 ]' 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.249 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.508 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.508 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.508 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.508 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.508 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.766 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:17.766 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.709 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.971 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.972 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.230 00:17:19.230 16:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.230 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.230 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.489 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.489 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.489 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.489 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.748 { 00:17:19.748 "cntlid": 115, 00:17:19.748 "qid": 0, 00:17:19.748 "state": "enabled", 00:17:19.748 "thread": "nvmf_tgt_poll_group_000", 00:17:19.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:19.748 "listen_address": { 00:17:19.748 "trtype": "TCP", 00:17:19.748 "adrfam": "IPv4", 00:17:19.748 "traddr": "10.0.0.2", 00:17:19.748 "trsvcid": "4420" 00:17:19.748 }, 00:17:19.748 "peer_address": { 00:17:19.748 "trtype": "TCP", 00:17:19.748 "adrfam": "IPv4", 00:17:19.748 "traddr": "10.0.0.1", 00:17:19.748 "trsvcid": "38938" 00:17:19.748 }, 00:17:19.748 "auth": { 00:17:19.748 "state": "completed", 00:17:19.748 "digest": "sha512", 00:17:19.748 "dhgroup": "ffdhe3072" 00:17:19.748 } 00:17:19.748 } 00:17:19.748 ]' 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.748 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.006 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:20.006 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.955 16:45:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.955 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.214 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:21.214 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.214 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.215 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.215 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.215 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.215 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.215 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.215 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.473 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.473 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.473 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.473 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.732 00:17:21.732 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.732 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.732 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.990 { 00:17:21.990 "cntlid": 117, 00:17:21.990 "qid": 0, 00:17:21.990 "state": "enabled", 00:17:21.990 "thread": "nvmf_tgt_poll_group_000", 00:17:21.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:21.990 "listen_address": { 00:17:21.990 "trtype": "TCP", 00:17:21.990 "adrfam": "IPv4", 00:17:21.990 "traddr": "10.0.0.2", 00:17:21.990 "trsvcid": "4420" 00:17:21.990 }, 00:17:21.990 "peer_address": { 00:17:21.990 "trtype": "TCP", 00:17:21.990 "adrfam": "IPv4", 00:17:21.990 "traddr": "10.0.0.1", 00:17:21.990 "trsvcid": "58248" 00:17:21.990 }, 00:17:21.990 "auth": { 00:17:21.990 "state": "completed", 00:17:21.990 "digest": "sha512", 00:17:21.990 "dhgroup": "ffdhe3072" 00:17:21.990 } 00:17:21.990 } 00:17:21.990 ]' 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.990 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.248 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:22.248 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.248 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.506 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:22.506 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:23.444 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.445 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.703 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.272 00:17:24.272 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.272 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.272 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.530 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.530 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.530 { 00:17:24.530 "cntlid": 119, 00:17:24.530 "qid": 0, 00:17:24.530 "state": "enabled", 00:17:24.530 "thread": "nvmf_tgt_poll_group_000", 00:17:24.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:24.530 "listen_address": { 00:17:24.530 
"trtype": "TCP", 00:17:24.530 "adrfam": "IPv4", 00:17:24.530 "traddr": "10.0.0.2", 00:17:24.530 "trsvcid": "4420" 00:17:24.530 }, 00:17:24.530 "peer_address": { 00:17:24.530 "trtype": "TCP", 00:17:24.530 "adrfam": "IPv4", 00:17:24.530 "traddr": "10.0.0.1", 00:17:24.530 "trsvcid": "58276" 00:17:24.530 }, 00:17:24.530 "auth": { 00:17:24.530 "state": "completed", 00:17:24.530 "digest": "sha512", 00:17:24.530 "dhgroup": "ffdhe3072" 00:17:24.530 } 00:17:24.530 } 00:17:24.530 ]' 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.530 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.789 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:24.789 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.727 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.986 16:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.986 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.554 00:17:26.554 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.554 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.554 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.812 { 00:17:26.812 "cntlid": 121, 00:17:26.812 "qid": 0, 00:17:26.812 "state": "enabled", 00:17:26.812 "thread": "nvmf_tgt_poll_group_000", 00:17:26.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:26.812 "listen_address": { 00:17:26.812 "trtype": "TCP", 00:17:26.812 "adrfam": "IPv4", 00:17:26.812 "traddr": "10.0.0.2", 00:17:26.812 "trsvcid": "4420" 00:17:26.812 }, 00:17:26.812 "peer_address": { 00:17:26.812 "trtype": "TCP", 00:17:26.812 "adrfam": "IPv4", 00:17:26.812 "traddr": "10.0.0.1", 00:17:26.812 "trsvcid": "58318" 00:17:26.812 }, 00:17:26.812 "auth": { 00:17:26.812 "state": "completed", 00:17:26.812 "digest": "sha512", 00:17:26.812 "dhgroup": "ffdhe4096" 00:17:26.812 } 00:17:26.812 } 00:17:26.812 ]' 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.812 16:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.812 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.380 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:27.381 16:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.321 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.580 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.839 00:17:28.839 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.839 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.839 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.098 { 00:17:29.098 "cntlid": 123, 00:17:29.098 "qid": 0, 00:17:29.098 "state": "enabled", 00:17:29.098 "thread": "nvmf_tgt_poll_group_000", 00:17:29.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:29.098 "listen_address": { 00:17:29.098 "trtype": "TCP", 00:17:29.098 "adrfam": "IPv4", 00:17:29.098 "traddr": "10.0.0.2", 00:17:29.098 "trsvcid": "4420" 00:17:29.098 }, 00:17:29.098 "peer_address": { 00:17:29.098 "trtype": "TCP", 00:17:29.098 "adrfam": "IPv4", 00:17:29.098 "traddr": "10.0.0.1", 00:17:29.098 "trsvcid": "58356" 00:17:29.098 }, 00:17:29.098 "auth": { 00:17:29.098 "state": "completed", 00:17:29.098 "digest": "sha512", 00:17:29.098 "dhgroup": "ffdhe4096" 00:17:29.098 } 00:17:29.098 } 00:17:29.098 ]' 00:17:29.098 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.356 16:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.614 16:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:29.614 16:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.554 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.812 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.379 00:17:31.379 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.379 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.379 16:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.638 { 00:17:31.638 "cntlid": 125, 00:17:31.638 "qid": 0, 00:17:31.638 "state": "enabled", 00:17:31.638 "thread": "nvmf_tgt_poll_group_000", 00:17:31.638 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:31.638 "listen_address": { 00:17:31.638 "trtype": "TCP", 00:17:31.638 "adrfam": "IPv4", 00:17:31.638 "traddr": "10.0.0.2", 00:17:31.638 "trsvcid": "4420" 00:17:31.638 }, 00:17:31.638 "peer_address": { 00:17:31.638 "trtype": "TCP", 00:17:31.638 "adrfam": "IPv4", 00:17:31.638 "traddr": "10.0.0.1", 00:17:31.638 "trsvcid": "33646" 00:17:31.638 }, 00:17:31.638 "auth": { 00:17:31.638 "state": "completed", 00:17:31.638 "digest": "sha512", 00:17:31.638 "dhgroup": "ffdhe4096" 00:17:31.638 } 00:17:31.638 } 00:17:31.638 ]' 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.638 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.896 16:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:31.896 16:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.830 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.088 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.657 00:17:33.657 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:33.657 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.657 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.923 { 00:17:33.923 "cntlid": 127, 00:17:33.923 "qid": 0, 00:17:33.923 "state": "enabled", 00:17:33.923 "thread": "nvmf_tgt_poll_group_000", 00:17:33.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.923 "listen_address": { 00:17:33.923 "trtype": "TCP", 00:17:33.923 "adrfam": "IPv4", 00:17:33.923 "traddr": "10.0.0.2", 00:17:33.923 "trsvcid": "4420" 00:17:33.923 }, 00:17:33.923 "peer_address": { 00:17:33.923 "trtype": "TCP", 00:17:33.923 "adrfam": "IPv4", 00:17:33.923 "traddr": "10.0.0.1", 00:17:33.923 "trsvcid": "33662" 00:17:33.923 }, 00:17:33.923 "auth": { 00:17:33.923 "state": "completed", 00:17:33.923 "digest": "sha512", 00:17:33.923 "dhgroup": "ffdhe4096" 00:17:33.923 } 00:17:33.923 } 00:17:33.923 ]' 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.923 16:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.923 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.240 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.240 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.240 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.240 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:34.240 16:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:35.202 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.202 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:35.202 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.202 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.472 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.472 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.472 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.472 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.472 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.733 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:35.733 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.734 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.297 00:17:36.297 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.297 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.297 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.555 { 00:17:36.555 "cntlid": 129, 00:17:36.555 "qid": 0, 00:17:36.555 "state": "enabled", 00:17:36.555 "thread": "nvmf_tgt_poll_group_000", 00:17:36.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:36.555 "listen_address": { 00:17:36.555 "trtype": "TCP", 00:17:36.555 "adrfam": "IPv4", 00:17:36.555 "traddr": "10.0.0.2", 00:17:36.555 "trsvcid": "4420" 00:17:36.555 }, 00:17:36.555 "peer_address": { 00:17:36.555 "trtype": "TCP", 00:17:36.555 "adrfam": "IPv4", 00:17:36.555 "traddr": "10.0.0.1", 00:17:36.555 "trsvcid": "33676" 00:17:36.555 }, 00:17:36.555 "auth": { 00:17:36.555 "state": "completed", 00:17:36.555 "digest": "sha512", 00:17:36.555 "dhgroup": "ffdhe6144" 00:17:36.555 } 00:17:36.555 } 00:17:36.555 ]' 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.555 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.812 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:36.812 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.744 16:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.744 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.003 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.568 00:17:38.568 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.568 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.568 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.826 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.826 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.826 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.826 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.084 { 00:17:39.084 "cntlid": 131, 00:17:39.084 "qid": 0, 00:17:39.084 "state": 
"enabled", 00:17:39.084 "thread": "nvmf_tgt_poll_group_000", 00:17:39.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:39.084 "listen_address": { 00:17:39.084 "trtype": "TCP", 00:17:39.084 "adrfam": "IPv4", 00:17:39.084 "traddr": "10.0.0.2", 00:17:39.084 "trsvcid": "4420" 00:17:39.084 }, 00:17:39.084 "peer_address": { 00:17:39.084 "trtype": "TCP", 00:17:39.084 "adrfam": "IPv4", 00:17:39.084 "traddr": "10.0.0.1", 00:17:39.084 "trsvcid": "33706" 00:17:39.084 }, 00:17:39.084 "auth": { 00:17:39.084 "state": "completed", 00:17:39.084 "digest": "sha512", 00:17:39.084 "dhgroup": "ffdhe6144" 00:17:39.084 } 00:17:39.084 } 00:17:39.084 ]' 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.084 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.342 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret 
DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:39.342 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:40.275 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.533 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.791 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.791 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.791 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.791 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.791 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.791 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.357 00:17:41.357 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.357 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.357 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.615 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.615 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.615 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.615 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.615 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.615 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.615 { 00:17:41.616 "cntlid": 133, 00:17:41.616 "qid": 0, 00:17:41.616 "state": "enabled", 00:17:41.616 "thread": "nvmf_tgt_poll_group_000", 00:17:41.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:41.616 "listen_address": { 00:17:41.616 "trtype": "TCP", 00:17:41.616 "adrfam": "IPv4", 00:17:41.616 "traddr": "10.0.0.2", 00:17:41.616 "trsvcid": "4420" 00:17:41.616 }, 00:17:41.616 "peer_address": { 00:17:41.616 "trtype": "TCP", 00:17:41.616 "adrfam": "IPv4", 00:17:41.616 "traddr": "10.0.0.1", 00:17:41.616 "trsvcid": "35676" 00:17:41.616 }, 00:17:41.616 "auth": { 00:17:41.616 "state": "completed", 00:17:41.616 "digest": "sha512", 00:17:41.616 "dhgroup": "ffdhe6144" 00:17:41.616 } 
00:17:41.616 } 00:17:41.616 ]' 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.616 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.874 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:41.874 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:17:42.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:42.808 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.066 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.631 00:17:43.631 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.631 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.631 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.889 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.889 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:43.889 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.889 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.146 { 00:17:44.146 "cntlid": 135, 00:17:44.146 "qid": 0, 00:17:44.146 "state": "enabled", 00:17:44.146 "thread": "nvmf_tgt_poll_group_000", 00:17:44.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:44.146 "listen_address": { 00:17:44.146 "trtype": "TCP", 00:17:44.146 "adrfam": "IPv4", 00:17:44.146 "traddr": "10.0.0.2", 00:17:44.146 "trsvcid": "4420" 00:17:44.146 }, 00:17:44.146 "peer_address": { 00:17:44.146 "trtype": "TCP", 00:17:44.146 "adrfam": "IPv4", 00:17:44.146 "traddr": "10.0.0.1", 00:17:44.146 "trsvcid": "35700" 00:17:44.146 }, 00:17:44.146 "auth": { 00:17:44.146 "state": "completed", 00:17:44.146 "digest": "sha512", 00:17:44.146 "dhgroup": "ffdhe6144" 00:17:44.146 } 00:17:44.146 } 00:17:44.146 ]' 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.146 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.147 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.147 16:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.147 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.404 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:44.404 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.338 16:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:45.338 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.596 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.530 00:17:46.530 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.530 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.530 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.789 { 00:17:46.789 "cntlid": 137, 00:17:46.789 "qid": 0, 00:17:46.789 "state": "enabled", 00:17:46.789 "thread": "nvmf_tgt_poll_group_000", 00:17:46.789 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:46.789 "listen_address": { 00:17:46.789 "trtype": "TCP", 00:17:46.789 "adrfam": "IPv4", 00:17:46.789 "traddr": "10.0.0.2", 00:17:46.789 "trsvcid": "4420" 00:17:46.789 }, 00:17:46.789 "peer_address": { 00:17:46.789 "trtype": "TCP", 00:17:46.789 "adrfam": "IPv4", 00:17:46.789 "traddr": "10.0.0.1", 00:17:46.789 "trsvcid": "35736" 00:17:46.789 }, 00:17:46.789 "auth": { 00:17:46.789 "state": "completed", 00:17:46.789 "digest": "sha512", 00:17:46.789 "dhgroup": "ffdhe8192" 00:17:46.789 } 00:17:46.789 } 00:17:46.789 ]' 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.789 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.046 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.046 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.046 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.046 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.046 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.305 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret 
DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:47.305 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.238 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.495 16:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.495 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.428 00:17:49.428 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.428 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.428 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.686 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.686 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.686 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.687 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.687 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.687 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.687 { 00:17:49.687 "cntlid": 139, 00:17:49.687 "qid": 0, 00:17:49.687 "state": "enabled", 00:17:49.687 "thread": "nvmf_tgt_poll_group_000", 00:17:49.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:49.687 "listen_address": { 00:17:49.687 "trtype": "TCP", 00:17:49.687 "adrfam": "IPv4", 00:17:49.687 "traddr": "10.0.0.2", 00:17:49.687 "trsvcid": "4420" 00:17:49.687 }, 00:17:49.687 "peer_address": { 00:17:49.687 "trtype": "TCP", 00:17:49.687 "adrfam": "IPv4", 00:17:49.687 "traddr": "10.0.0.1", 00:17:49.687 "trsvcid": "35766" 00:17:49.687 }, 00:17:49.687 "auth": { 00:17:49.687 "state": 
"completed", 00:17:49.687 "digest": "sha512", 00:17:49.687 "dhgroup": "ffdhe8192" 00:17:49.687 } 00:17:49.687 } 00:17:49.687 ]' 00:17:49.687 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.687 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.687 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.944 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.944 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.944 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.944 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.944 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.202 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:50.202 16:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: --dhchap-ctrl-secret DHHC-1:02:OTg5OGFjMTI0ZDZiNWRlODhiYWI5ZmFlYzk3ZWM3MjhmNzdjZjdhYzQ4MjRmZDY4is4JFQ==: 00:17:51.137 16:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.137 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.396 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.331 00:17:52.331 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.331 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.331 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.589 
16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.589 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.589 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.589 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.589 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.589 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.589 { 00:17:52.589 "cntlid": 141, 00:17:52.589 "qid": 0, 00:17:52.589 "state": "enabled", 00:17:52.589 "thread": "nvmf_tgt_poll_group_000", 00:17:52.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:52.589 "listen_address": { 00:17:52.589 "trtype": "TCP", 00:17:52.589 "adrfam": "IPv4", 00:17:52.589 "traddr": "10.0.0.2", 00:17:52.589 "trsvcid": "4420" 00:17:52.589 }, 00:17:52.589 "peer_address": { 00:17:52.589 "trtype": "TCP", 00:17:52.589 "adrfam": "IPv4", 00:17:52.589 "traddr": "10.0.0.1", 00:17:52.589 "trsvcid": "54074" 00:17:52.589 }, 00:17:52.589 "auth": { 00:17:52.589 "state": "completed", 00:17:52.589 "digest": "sha512", 00:17:52.589 "dhgroup": "ffdhe8192" 00:17:52.589 } 00:17:52.589 } 00:17:52.589 ]' 00:17:52.589 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.847 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.847 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.847 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.847 16:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.847 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.847 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.847 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.105 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:53.105 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:01:NzcyOWUyNTA1YWY0MzhmYmY2MmVlOTE4YWFhODM2Y2EkngmF: 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.042 
16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.042 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.301 16:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.301 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.234 00:17:55.234 16:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.234 16:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.234 16:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.493 { 00:17:55.493 "cntlid": 143, 
00:17:55.493 "qid": 0, 00:17:55.493 "state": "enabled", 00:17:55.493 "thread": "nvmf_tgt_poll_group_000", 00:17:55.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:55.493 "listen_address": { 00:17:55.493 "trtype": "TCP", 00:17:55.493 "adrfam": "IPv4", 00:17:55.493 "traddr": "10.0.0.2", 00:17:55.493 "trsvcid": "4420" 00:17:55.493 }, 00:17:55.493 "peer_address": { 00:17:55.493 "trtype": "TCP", 00:17:55.493 "adrfam": "IPv4", 00:17:55.493 "traddr": "10.0.0.1", 00:17:55.493 "trsvcid": "54122" 00:17:55.493 }, 00:17:55.493 "auth": { 00:17:55.493 "state": "completed", 00:17:55.493 "digest": "sha512", 00:17:55.493 "dhgroup": "ffdhe8192" 00:17:55.493 } 00:17:55.493 } 00:17:55.493 ]' 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.493 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.752 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.752 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.752 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.010 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:56.010 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:17:56.945 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.203 16:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.164 00:17:58.164 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.164 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.164 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.422 { 00:17:58.422 "cntlid": 145, 00:17:58.422 "qid": 0, 00:17:58.422 "state": "enabled", 00:17:58.422 "thread": "nvmf_tgt_poll_group_000", 00:17:58.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:58.422 "listen_address": { 
00:17:58.422 "trtype": "TCP", 00:17:58.422 "adrfam": "IPv4", 00:17:58.422 "traddr": "10.0.0.2", 00:17:58.422 "trsvcid": "4420" 00:17:58.422 }, 00:17:58.422 "peer_address": { 00:17:58.422 "trtype": "TCP", 00:17:58.422 "adrfam": "IPv4", 00:17:58.422 "traddr": "10.0.0.1", 00:17:58.422 "trsvcid": "54150" 00:17:58.422 }, 00:17:58.422 "auth": { 00:17:58.422 "state": "completed", 00:17:58.422 "digest": "sha512", 00:17:58.422 "dhgroup": "ffdhe8192" 00:17:58.422 } 00:17:58.422 } 00:17:58.422 ]' 00:17:58.422 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.422 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.422 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.422 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.422 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.680 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.680 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.680 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.938 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:58.938 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ODRlZmI4YzhlMDEyOTg2MTc5MDcxMDU2N2EzYzRkYWU2NmFiZjMxODQ5NDAzMmIypcBryw==: --dhchap-ctrl-secret DHHC-1:03:NWI5MGUwYmVkNTdkMzMwNzY3NGNmYWI3NTg5N2VmYjk4NjkwOTRhNzVlZWJlNzlmN2UzM2RkMjU4MGQ4YTlkMD3Xjnw=: 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:59.906 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:00.841 request: 00:18:00.841 { 00:18:00.841 "name": "nvme0", 00:18:00.841 "trtype": "tcp", 00:18:00.841 "traddr": "10.0.0.2", 00:18:00.841 "adrfam": "ipv4", 00:18:00.841 "trsvcid": "4420", 00:18:00.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:00.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:00.841 "prchk_reftag": false, 00:18:00.841 "prchk_guard": false, 00:18:00.841 "hdgst": false, 00:18:00.841 "ddgst": 
false, 00:18:00.841 "dhchap_key": "key2", 00:18:00.841 "allow_unrecognized_csi": false, 00:18:00.841 "method": "bdev_nvme_attach_controller", 00:18:00.841 "req_id": 1 00:18:00.841 } 00:18:00.841 Got JSON-RPC error response 00:18:00.841 response: 00:18:00.841 { 00:18:00.841 "code": -5, 00:18:00.841 "message": "Input/output error" 00:18:00.841 } 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:00.841 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:01.776 request: 00:18:01.776 { 00:18:01.776 "name": "nvme0", 00:18:01.776 "trtype": "tcp", 00:18:01.776 "traddr": "10.0.0.2", 
00:18:01.776 "adrfam": "ipv4", 00:18:01.776 "trsvcid": "4420", 00:18:01.776 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:01.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:01.776 "prchk_reftag": false, 00:18:01.776 "prchk_guard": false, 00:18:01.776 "hdgst": false, 00:18:01.776 "ddgst": false, 00:18:01.776 "dhchap_key": "key1", 00:18:01.776 "dhchap_ctrlr_key": "ckey2", 00:18:01.776 "allow_unrecognized_csi": false, 00:18:01.776 "method": "bdev_nvme_attach_controller", 00:18:01.776 "req_id": 1 00:18:01.776 } 00:18:01.776 Got JSON-RPC error response 00:18:01.776 response: 00:18:01.776 { 00:18:01.776 "code": -5, 00:18:01.776 "message": "Input/output error" 00:18:01.776 } 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.776 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.342 request: 00:18:02.342 { 00:18:02.342 "name": "nvme0", 00:18:02.342 "trtype": "tcp", 00:18:02.342 "traddr": "10.0.0.2", 00:18:02.342 "adrfam": "ipv4", 00:18:02.342 "trsvcid": "4420", 00:18:02.342 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:02.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:02.342 "prchk_reftag": false, 00:18:02.342 "prchk_guard": false, 00:18:02.342 "hdgst": false, 00:18:02.342 "ddgst": false, 00:18:02.342 "dhchap_key": "key1", 00:18:02.342 "dhchap_ctrlr_key": "ckey1", 00:18:02.342 "allow_unrecognized_csi": false, 00:18:02.342 "method": "bdev_nvme_attach_controller", 00:18:02.342 "req_id": 1 00:18:02.342 } 00:18:02.342 Got JSON-RPC error response 00:18:02.342 response: 00:18:02.342 { 00:18:02.342 "code": -5, 00:18:02.342 "message": "Input/output error" 00:18:02.342 } 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.342 
16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2339503 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2339503 ']' 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2339503 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.342 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2339503 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2339503' 00:18:02.601 killing process with pid 2339503 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2339503 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2339503 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2362851 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2362851 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2362851 ']' 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.601 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.860 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.860 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:02.860 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:02.860 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.860 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2362851 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2362851 ']' 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.118 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.377 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:03.377 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:03.377 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 null0 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uI7 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.kSW ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kSW 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wgd 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.P2Z ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P2Z 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LOE 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.377 16:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Ehf ]] 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ehf 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.377 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MT3 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.636 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.011 nvme0n1 00:18:05.011 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.011 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.011 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.269 { 00:18:05.269 "cntlid": 1, 00:18:05.269 "qid": 0, 00:18:05.269 "state": "enabled", 00:18:05.269 "thread": "nvmf_tgt_poll_group_000", 00:18:05.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:05.269 "listen_address": { 00:18:05.269 "trtype": "TCP", 00:18:05.269 "adrfam": "IPv4", 00:18:05.269 "traddr": "10.0.0.2", 00:18:05.269 "trsvcid": "4420" 00:18:05.269 }, 00:18:05.269 "peer_address": { 00:18:05.269 "trtype": "TCP", 00:18:05.269 "adrfam": "IPv4", 00:18:05.269 "traddr": "10.0.0.1", 00:18:05.269 "trsvcid": "56968" 00:18:05.269 }, 00:18:05.269 "auth": { 00:18:05.269 "state": "completed", 00:18:05.269 "digest": "sha512", 00:18:05.269 "dhgroup": "ffdhe8192" 00:18:05.269 } 00:18:05.269 } 00:18:05.269 ]' 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.269 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.527 16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.527 
16:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.527 16:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.527 16:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.527 16:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.786 16:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:18:05.786 16:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=: 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.720 
16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:06.720 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 
--dhchap-key key3 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.979 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.544 request: 00:18:07.544 { 00:18:07.544 "name": "nvme0", 00:18:07.544 "trtype": "tcp", 00:18:07.544 "traddr": "10.0.0.2", 00:18:07.544 "adrfam": "ipv4", 00:18:07.544 "trsvcid": "4420", 00:18:07.544 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:07.544 "prchk_reftag": false, 00:18:07.544 "prchk_guard": false, 00:18:07.544 "hdgst": false, 00:18:07.544 "ddgst": false, 00:18:07.544 "dhchap_key": "key3", 00:18:07.544 "allow_unrecognized_csi": false, 00:18:07.544 "method": "bdev_nvme_attach_controller", 00:18:07.544 "req_id": 1 00:18:07.544 } 00:18:07.544 Got JSON-RPC error response 00:18:07.544 response: 00:18:07.544 { 00:18:07.544 "code": -5, 00:18:07.544 "message": "Input/output error" 00:18:07.544 } 00:18:07.544 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.544 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.545 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.545 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.545 16:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:07.545 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:07.545 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:07.545 16:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:07.545 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.803 request: 00:18:07.803 { 00:18:07.803 "name": "nvme0", 00:18:07.803 "trtype": "tcp", 00:18:07.803 "traddr": "10.0.0.2", 00:18:07.803 "adrfam": "ipv4", 00:18:07.803 "trsvcid": "4420", 00:18:07.803 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:07.803 "prchk_reftag": false, 00:18:07.803 "prchk_guard": false, 00:18:07.803 "hdgst": false, 00:18:07.803 "ddgst": false, 00:18:07.803 "dhchap_key": "key3", 00:18:07.803 "allow_unrecognized_csi": false, 00:18:07.803 "method": "bdev_nvme_attach_controller", 00:18:07.803 "req_id": 1 00:18:07.803 } 00:18:07.803 Got JSON-RPC error response 00:18:07.803 response: 00:18:07.803 { 00:18:07.803 "code": -5, 00:18:07.803 "message": "Input/output error" 00:18:07.803 } 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:08.061 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:08.320 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:08.887 request:
00:18:08.887 {
00:18:08.887 "name": "nvme0",
00:18:08.887 "trtype": "tcp",
00:18:08.887 "traddr": "10.0.0.2",
00:18:08.887 "adrfam": "ipv4",
00:18:08.887 "trsvcid": "4420",
00:18:08.887 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:08.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:08.887 "prchk_reftag": false,
00:18:08.887 "prchk_guard": false,
00:18:08.887 "hdgst": false,
00:18:08.887 "ddgst": false,
00:18:08.887 "dhchap_key": "key0",
00:18:08.887 "dhchap_ctrlr_key": "key1",
00:18:08.887 "allow_unrecognized_csi": false,
00:18:08.887 "method": "bdev_nvme_attach_controller",
00:18:08.887 "req_id": 1
00:18:08.887 }
00:18:08.887 Got JSON-RPC error response
00:18:08.887 response:
00:18:08.887 {
00:18:08.887 "code": -5,
00:18:08.887 "message": "Input/output error"
00:18:08.887 }
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:08.887 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:09.145 nvme0n1
00:18:09.145 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:18:09.145 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:18:09.145 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:09.404 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:09.404 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:09.404 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:09.662 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:11.562 nvme0n1
00:18:11.562 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:18:11.562 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:11.562 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:18:11.562 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:11.821 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:11.821 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=:
00:18:11.821 16:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: --dhchap-ctrl-secret DHHC-1:03:ZDljMDc1M2U1NGRhNjc0NmIwZDdmOGJmM2FlNjBjODgwM2ZjNGI0NzFlYTM4ZDIyNjQwN2M4YjA5M2RiYjNiN1LSBB4=:
00:18:12.754 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:18:12.754 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:18:12.754 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:18:12.754 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:18:12.754 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:18:12.754 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:18:12.755 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:18:12.755 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:12.755 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:13.013 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:13.946 request:
00:18:13.946 {
00:18:13.946 "name": "nvme0",
00:18:13.946 "trtype": "tcp",
00:18:13.946 "traddr": "10.0.0.2",
00:18:13.946 "adrfam": "ipv4",
00:18:13.946 "trsvcid": "4420",
00:18:13.946 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:13.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:13.946 "prchk_reftag": false,
00:18:13.946 "prchk_guard": false,
00:18:13.946 "hdgst": false,
00:18:13.946 "ddgst": false,
00:18:13.946 "dhchap_key": "key1",
00:18:13.946 "allow_unrecognized_csi": false,
00:18:13.946 "method": "bdev_nvme_attach_controller",
00:18:13.946 "req_id": 1
00:18:13.946 }
00:18:13.946 Got JSON-RPC error response
00:18:13.946 response:
00:18:13.946 {
00:18:13.946 "code": -5,
00:18:13.946 "message": "Input/output error"
00:18:13.946 }
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:13.946 16:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:15.320 nvme0n1
00:18:15.320 16:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:18:15.320 16:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:15.320 16:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:18:15.886 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:15.886 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:15.886 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:16.144 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:16.403 nvme0n1
00:18:16.403 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:18:16.403 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:18:16.403 16:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:16.659 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:16.659 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:16.659 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: '' 2s
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi:
00:18:16.917 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:18:17.175 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:17.175 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:17.175 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi: ]]
00:18:17.175 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDFiODRiZDhjZDk1MGQ4OTdmYTQxMTMzNWM5NThmNjFtPbYi:
00:18:17.175 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:18:17.175 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:17.176 16:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: 2s
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==:
00:18:19.074 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:19.075 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:19.075 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:18:19.075 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==: ]]
00:18:19.075 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWNjN2U1MWVlNGZmMWE5OWNjMzcyMmI5OWU2MTU0MzE5NmM3NWNiZjk4NDg0ZTgwLxx3Uw==:
00:18:19.075 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:19.075 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:20.972 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:18:20.972 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:18:20.972 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:18:20.972 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:18:21.229 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:18:21.229 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:21.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:21.230 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:22.603 nvme0n1
00:18:22.603 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:22.603 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.603 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.603 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.603 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:22.603 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:23.543 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:18:23.543 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:18:23.543 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:18:23.803 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:18:24.061 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:18:24.061 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:18:24.061 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:24.319 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:25.252 request:
00:18:25.252 {
00:18:25.252 "name": "nvme0",
00:18:25.252 "dhchap_key": "key1",
00:18:25.252 "dhchap_ctrlr_key": "key3",
00:18:25.252 "method": "bdev_nvme_set_keys",
00:18:25.252 "req_id": 1
00:18:25.252 }
00:18:25.252 Got JSON-RPC error response
00:18:25.252 response:
00:18:25.252 {
00:18:25.252 "code": -13,
00:18:25.252 "message": "Permission denied"
00:18:25.252 }
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.252 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:25.510 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:18:25.510 16:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:18:26.445 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:26.445 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:26.445 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:26.721 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:18:26.721 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:26.721 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.721 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.989 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.989 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:26.990 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:26.990 16:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:28.365 nvme0n1
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:28.365 16:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:29.298 request:
00:18:29.298 {
00:18:29.298 "name": "nvme0",
00:18:29.298 "dhchap_key": "key2",
00:18:29.298 "dhchap_ctrlr_key": "key0",
00:18:29.298 "method": "bdev_nvme_set_keys",
00:18:29.298 "req_id": 1
00:18:29.298 }
00:18:29.298 Got JSON-RPC error response
00:18:29.298 response:
00:18:29.298 {
00:18:29.298 "code": -13,
00:18:29.298 "message": "Permission denied"
00:18:29.298 }
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:29.298 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.556 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:18:29.556 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:18:30.490 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:30.490 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:30.490 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:30.748 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:18:30.748 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2339524
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2339524 ']'
00:18:32.124 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2339524
00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2339524
00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2339524'
killing process
with pid 2339524 00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2339524 00:18:32.125 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2339524 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:32.692 rmmod nvme_tcp 00:18:32.692 rmmod nvme_fabrics 00:18:32.692 rmmod nvme_keyring 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 2362851 ']' 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 2362851 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2362851 ']' 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2362851 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 
00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2362851 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2362851' 00:18:32.692 killing process with pid 2362851 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2362851 00:18:32.692 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2362851 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.951 16:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.951 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uI7 /tmp/spdk.key-sha256.wgd /tmp/spdk.key-sha384.LOE /tmp/spdk.key-sha512.MT3 /tmp/spdk.key-sha512.kSW /tmp/spdk.key-sha384.P2Z /tmp/spdk.key-sha256.Ehf '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:34.856 00:18:34.856 real 3m41.836s 00:18:34.856 user 8m39.374s 00:18:34.856 sys 0m27.806s 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.856 ************************************ 00:18:34.856 END TEST nvmf_auth_target 00:18:34.856 ************************************ 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.856 16:46:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:18:35.115 ************************************ 00:18:35.115 START TEST nvmf_bdevio_no_huge 00:18:35.115 ************************************ 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:35.115 * Looking for test storage... 00:18:35.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.115 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.116 16:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.116 --rc genhtml_branch_coverage=1 00:18:35.116 --rc genhtml_function_coverage=1 00:18:35.116 --rc genhtml_legend=1 00:18:35.116 --rc geninfo_all_blocks=1 00:18:35.116 --rc geninfo_unexecuted_blocks=1 00:18:35.116 00:18:35.116 ' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.116 --rc genhtml_branch_coverage=1 00:18:35.116 --rc genhtml_function_coverage=1 00:18:35.116 --rc genhtml_legend=1 00:18:35.116 --rc geninfo_all_blocks=1 00:18:35.116 --rc geninfo_unexecuted_blocks=1 00:18:35.116 00:18:35.116 ' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.116 --rc genhtml_branch_coverage=1 00:18:35.116 --rc genhtml_function_coverage=1 00:18:35.116 --rc genhtml_legend=1 00:18:35.116 --rc geninfo_all_blocks=1 00:18:35.116 --rc geninfo_unexecuted_blocks=1 00:18:35.116 00:18:35.116 ' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:35.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.116 --rc genhtml_branch_coverage=1 00:18:35.116 --rc genhtml_function_coverage=1 00:18:35.116 --rc 
genhtml_legend=1 00:18:35.116 --rc geninfo_all_blocks=1 00:18:35.116 --rc geninfo_unexecuted_blocks=1 00:18:35.116 00:18:35.116 ' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:35.116 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:37.020 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:18:37.279 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:37.279 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:37.279 Found net devices under 0000:09:00.0: cvl_0_0 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.279 
16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:37.279 Found net devices under 0000:09:00.1: cvl_0_1 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
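With two interfaces discovered, the trace (common.sh@250-259) pairs them up: the first becomes the target-side port (10.0.0.2) and the second the initiator-side port (10.0.0.1). A small sketch of that assignment, with interface names copied from the log — note the single-interface fallback branch is an assumption, since the trace only exercises the two-port path:

```shell
# Sketch of the interface/IP pairing seen at nvmf/common.sh@250-259 in the
# trace. Pure variable logic; nothing is configured on the host here.
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

TCP_INTERFACE_LIST=(cvl_0_0 cvl_0_1)   # "${net_devs[@]}" in the real script

if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
    # Two ports: dedicate one to the target, one to the initiator.
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
else
    # Assumed fallback: a single port plays both roles.
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[0]}
fi

echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
```

This matches the trace's outcome: `NVMF_TARGET_INTERFACE=cvl_0_0`, `NVMF_INITIATOR_INTERFACE=cvl_0_1`.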
00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:37.279 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:37.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:18:37.279 00:18:37.279 --- 10.0.0.2 ping statistics --- 00:18:37.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.279 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:18:37.280 00:18:37.280 --- 10.0.0.1 ping statistics --- 00:18:37.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.280 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=2368384 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 2368384 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2368384 ']' 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.280 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.280 [2024-10-17 16:46:50.928780] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:18:37.280 [2024-10-17 16:46:50.928856] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:37.539 [2024-10-17 16:46:50.998885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.539 [2024-10-17 16:46:51.056604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.539 [2024-10-17 16:46:51.056656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.539 [2024-10-17 16:46:51.056680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.539 [2024-10-17 16:46:51.056691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.539 [2024-10-17 16:46:51.056700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
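The namespace plumbing traced earlier (nvmf_tcp_init, common.sh@265-291) moves the target port into its own network namespace, addresses both ends, opens TCP/4420, and ping-checks the link. The sketch below replays that command sequence with `ip`, `iptables`, and `ping` replaced by logging stubs, so it runs without root or real hardware; interface and namespace names are the ones from the log:

```shell
# Replay of the nvmf_tcp_init sequence from the trace. The privileged tools
# are stubbed as shell functions that only record their arguments; drop the
# stubs (and run as root) to perform the real setup.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1

cmds=()
ip()       { cmds+=("ip $*"); }
iptables() { cmds+=("iptables $*"); }
ping()     { cmds+=("ping $*"); }

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$target_if" netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$target_if" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

printf '%s\n' "${cmds[@]}"
```

After this setup the trace prefixes every target-side process with `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` array), which is why `nvmf_tgt` below is launched inside the namespace.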
00:18:37.539 [2024-10-17 16:46:51.057774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.539 [2024-10-17 16:46:51.057874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:37.539 [2024-10-17 16:46:51.057878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.539 [2024-10-17 16:46:51.057834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.539 [2024-10-17 16:46:51.214868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.539 16:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.539 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.798 Malloc0 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.798 [2024-10-17 16:46:51.253443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.798 16:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:37.798 { 00:18:37.798 "params": { 00:18:37.798 "name": "Nvme$subsystem", 00:18:37.798 "trtype": "$TEST_TRANSPORT", 00:18:37.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.798 "adrfam": "ipv4", 00:18:37.798 "trsvcid": "$NVMF_PORT", 00:18:37.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.798 "hdgst": ${hdgst:-false}, 00:18:37.798 "ddgst": ${ddgst:-false} 00:18:37.798 }, 00:18:37.798 "method": "bdev_nvme_attach_controller" 00:18:37.798 } 00:18:37.798 EOF 00:18:37.798 )") 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:37.798 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:37.798 "params": { 00:18:37.798 "name": "Nvme1", 00:18:37.798 "trtype": "tcp", 00:18:37.798 "traddr": "10.0.0.2", 00:18:37.798 "adrfam": "ipv4", 00:18:37.798 "trsvcid": "4420", 00:18:37.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.798 "hdgst": false, 00:18:37.798 "ddgst": false 00:18:37.798 }, 00:18:37.798 "method": "bdev_nvme_attach_controller" 00:18:37.798 }' 00:18:37.798 [2024-10-17 16:46:51.304186] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:18:37.798 [2024-10-17 16:46:51.304259] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2368409 ] 00:18:37.798 [2024-10-17 16:46:51.365817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.798 [2024-10-17 16:46:51.430646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.798 [2024-10-17 16:46:51.430695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.798 [2024-10-17 16:46:51.430699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.056 I/O targets: 00:18:38.056 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:38.056 00:18:38.056 00:18:38.056 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.056 http://cunit.sourceforge.net/ 00:18:38.056 00:18:38.056 00:18:38.056 Suite: bdevio tests on: Nvme1n1 00:18:38.314 Test: blockdev write read block ...passed 00:18:38.314 Test: blockdev write zeroes read block ...passed 00:18:38.314 Test: blockdev write zeroes read no split ...passed 00:18:38.314 Test: blockdev write zeroes 
read split ...passed 00:18:38.314 Test: blockdev write zeroes read split partial ...passed 00:18:38.314 Test: blockdev reset ...[2024-10-17 16:46:51.859047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.314 [2024-10-17 16:46:51.859179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcc7e0 (9): Bad file descriptor 00:18:38.314 [2024-10-17 16:46:51.877392] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:38.314 passed 00:18:38.314 Test: blockdev write read 8 blocks ...passed 00:18:38.314 Test: blockdev write read size > 128k ...passed 00:18:38.314 Test: blockdev write read invalid size ...passed 00:18:38.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:38.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:38.314 Test: blockdev write read max offset ...passed 00:18:38.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:38.572 Test: blockdev writev readv 8 blocks ...passed 00:18:38.572 Test: blockdev writev readv 30 x 1block ...passed 00:18:38.572 Test: blockdev writev readv block ...passed 00:18:38.572 Test: blockdev writev readv size > 128k ...passed 00:18:38.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:38.572 Test: blockdev comparev and writev ...[2024-10-17 16:46:52.130452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.130489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.130513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.130531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.130918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.130942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.130965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.130989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.131350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.131373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.131395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.131411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.131786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.572 [2024-10-17 16:46:52.131812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.131835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:18:38.572 [2024-10-17 16:46:52.131852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:38.572 passed 00:18:38.572 Test: blockdev nvme passthru rw ...passed 00:18:38.572 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:46:52.214297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.572 [2024-10-17 16:46:52.214324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.214467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.572 [2024-10-17 16:46:52.214490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.214622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.572 [2024-10-17 16:46:52.214645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:38.572 [2024-10-17 16:46:52.214782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:38.572 [2024-10-17 16:46:52.214805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:38.572 passed 00:18:38.572 Test: blockdev nvme admin passthru ...passed 00:18:38.830 Test: blockdev copy ...passed 00:18:38.830 00:18:38.830 Run Summary: Type Total Ran Passed Failed Inactive 00:18:38.830 suites 1 1 n/a 0 0 00:18:38.830 tests 23 23 23 0 0 00:18:38.831 asserts 152 152 152 0 n/a 00:18:38.831 00:18:38.831 Elapsed time = 1.062 seconds 00:18:39.089 16:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.089 rmmod nvme_tcp 00:18:39.089 rmmod nvme_fabrics 00:18:39.089 rmmod nvme_keyring 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 2368384 ']' 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 2368384 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2368384 ']' 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2368384 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2368384 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2368384' 00:18:39.089 killing process with pid 2368384 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2368384 00:18:39.089 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2368384 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.657 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:41.564 00:18:41.564 real 0m6.607s 00:18:41.564 user 0m10.991s 00:18:41.564 sys 0m2.566s 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.564 ************************************ 00:18:41.564 END TEST nvmf_bdevio_no_huge 00:18:41.564 ************************************ 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.564 ************************************ 00:18:41.564 START TEST nvmf_tls 
00:18:41.564 ************************************ 00:18:41.564 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.824 * Looking for test storage... 00:18:41.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.824 16:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:18:41.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.824 --rc genhtml_branch_coverage=1 00:18:41.824 --rc genhtml_function_coverage=1 00:18:41.824 --rc genhtml_legend=1 00:18:41.824 --rc geninfo_all_blocks=1 00:18:41.824 --rc geninfo_unexecuted_blocks=1 00:18:41.824 00:18:41.824 ' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:41.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.824 --rc genhtml_branch_coverage=1 00:18:41.824 --rc genhtml_function_coverage=1 00:18:41.824 --rc genhtml_legend=1 00:18:41.824 --rc geninfo_all_blocks=1 00:18:41.824 --rc geninfo_unexecuted_blocks=1 00:18:41.824 00:18:41.824 ' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:41.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.824 --rc genhtml_branch_coverage=1 00:18:41.824 --rc genhtml_function_coverage=1 00:18:41.824 --rc genhtml_legend=1 00:18:41.824 --rc geninfo_all_blocks=1 00:18:41.824 --rc geninfo_unexecuted_blocks=1 00:18:41.824 00:18:41.824 ' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:41.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.824 --rc genhtml_branch_coverage=1 00:18:41.824 --rc genhtml_function_coverage=1 00:18:41.824 --rc genhtml_legend=1 00:18:41.824 --rc geninfo_all_blocks=1 00:18:41.824 --rc geninfo_unexecuted_blocks=1 00:18:41.824 00:18:41.824 ' 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:41.824 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:41.825 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.727 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:43.728 16:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:43.728 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:43.728 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:43.728 16:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:43.728 Found net devices under 0000:09:00.0: cvl_0_0 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:43.728 Found net devices under 0000:09:00.1: cvl_0_1 00:18:43.728 16:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:43.728 
16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:43.728 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:43.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:18:43.987 00:18:43.987 --- 10.0.0.2 ping statistics --- 00:18:43.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.987 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:43.987 00:18:43.987 --- 10.0.0.1 ping statistics --- 00:18:43.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.987 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2370602 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2370602 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2370602 ']' 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.987 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.987 [2024-10-17 16:46:57.539156] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:18:43.987 [2024-10-17 16:46:57.539235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.987 [2024-10-17 16:46:57.602435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.987 [2024-10-17 16:46:57.658259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.988 [2024-10-17 16:46:57.658340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:43.988 [2024-10-17 16:46:57.658363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.988 [2024-10-17 16:46:57.658374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.988 [2024-10-17 16:46:57.658384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.988 [2024-10-17 16:46:57.658986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:44.246 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:44.505 true 00:18:44.505 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:44.505 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:44.763 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:44.763 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:44.763 
16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:45.022 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.022 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:45.280 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:45.280 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:45.280 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:45.539 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.539 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:45.797 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:45.797 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:45.797 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.797 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:46.055 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:46.055 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:46.055 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
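The records above show the set-then-verify idiom `tls.sh` uses for socket implementation options: call `sock_impl_set_options -i ssl` with a new value, read the options back with `sock_impl_get_options`, extract one field with `jq -r` (`.tls_version`, `.enable_ktls`), and fail if the value did not stick. A minimal Python sketch of that round-trip check, using a hard-coded JSON response in the shape suggested by the jq filters (only `tls_version` and `enable_ktls` are attested in the log; any other fields of the real RPC response are not shown here):

```python
import json

# Hypothetical sock_impl_get_options response for the "ssl" implementation.
# Field names are taken from the jq filters in tls.sh (.tls_version,
# .enable_ktls); the values mirror the state after "--tls-version 7" with
# kTLS disabled, as seen in the log above.
response = json.loads('{"tls_version": 7, "enable_ktls": false}')

def verify_option(options: dict, field: str, expected) -> None:
    # Same logic as the shell pattern:  [[ "$actual" != "$expected" ]] && fail
    actual = options[field]
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

verify_option(response, "tls_version", 7)
verify_option(response, "enable_ktls", False)
```

Reading the value back instead of trusting the set call is what turns each RPC into a test assertion, which is why the log alternates `set_options` and `get_options` for every toggle.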
00:18:46.313 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.313 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:46.880 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:46.880 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:46.880 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:47.138 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.138 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:47.397 16:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.rbD5lvAYub 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.LY3SAp84rQ 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.rbD5lvAYub 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.LY3SAp84rQ 00:18:47.397 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:47.656 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:48.223 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.rbD5lvAYub 00:18:48.223 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rbD5lvAYub 00:18:48.223 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.223 [2024-10-17 16:47:01.874634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.223 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.482 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.049 [2024-10-17 16:47:02.432174] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.049 [2024-10-17 16:47:02.432460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.049 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.049 malloc0 00:18:49.049 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rbD5lvAYub 00:18:49.615 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.872 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rbD5lvAYub 00:19:02.105 Initializing NVMe Controllers 00:19:02.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:02.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:02.105 Initialization complete. Launching workers. 
00:19:02.105 ======================================================== 00:19:02.105 Latency(us) 00:19:02.105 Device Information : IOPS MiB/s Average min max 00:19:02.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7601.45 29.69 8422.36 1262.31 9766.35 00:19:02.105 ======================================================== 00:19:02.105 Total : 7601.45 29.69 8422.36 1262.31 9766.35 00:19:02.105 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rbD5lvAYub 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rbD5lvAYub 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2372504 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2372504 /var/tmp/bdevperf.sock 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2372504 ']' 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.105 [2024-10-17 16:47:13.707995] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:02.105 [2024-10-17 16:47:13.708099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372504 ] 00:19:02.105 [2024-10-17 16:47:13.765079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.105 [2024-10-17 16:47:13.823645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.105 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:02.106 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rbD5lvAYub 00:19:02.106 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:02.106 [2024-10-17 16:47:14.470111] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.106 TLSTESTn1 00:19:02.106 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.106 Running I/O for 10 seconds... 00:19:03.066 3332.00 IOPS, 13.02 MiB/s [2024-10-17T14:47:18.128Z] 3401.00 IOPS, 13.29 MiB/s [2024-10-17T14:47:19.062Z] 3409.67 IOPS, 13.32 MiB/s [2024-10-17T14:47:19.995Z] 3423.50 IOPS, 13.37 MiB/s [2024-10-17T14:47:20.928Z] 3433.80 IOPS, 13.41 MiB/s [2024-10-17T14:47:21.860Z] 3436.33 IOPS, 13.42 MiB/s [2024-10-17T14:47:22.793Z] 3446.43 IOPS, 13.46 MiB/s [2024-10-17T14:47:23.727Z] 3462.00 IOPS, 13.52 MiB/s [2024-10-17T14:47:25.100Z] 3463.22 IOPS, 13.53 MiB/s [2024-10-17T14:47:25.100Z] 3438.60 IOPS, 13.43 MiB/s 00:19:11.410 Latency(us) 00:19:11.410 [2024-10-17T14:47:25.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.410 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.410 Verification LBA range: start 0x0 length 0x2000 00:19:11.410 TLSTESTn1 : 10.02 3445.35 13.46 0.00 0.00 37093.67 6213.78 42137.22 00:19:11.410 [2024-10-17T14:47:25.100Z] =================================================================================================================== 00:19:11.410 [2024-10-17T14:47:25.100Z] Total : 3445.35 13.46 0.00 0.00 37093.67 6213.78 42137.22 00:19:11.410 { 00:19:11.410 "results": [ 00:19:11.410 { 00:19:11.410 "job": "TLSTESTn1", 00:19:11.410 "core_mask": "0x4", 00:19:11.410 "workload": "verify", 00:19:11.410 "status": "finished", 00:19:11.410 "verify_range": { 00:19:11.410 "start": 0, 00:19:11.410 "length": 8192 00:19:11.410 }, 00:19:11.410 "queue_depth": 128, 00:19:11.410 "io_size": 4096, 00:19:11.410 "runtime": 10.016967, 00:19:11.410 "iops": 
3445.354267414478, 00:19:11.410 "mibps": 13.458415107087804, 00:19:11.410 "io_failed": 0, 00:19:11.410 "io_timeout": 0, 00:19:11.410 "avg_latency_us": 37093.672514981365, 00:19:11.410 "min_latency_us": 6213.783703703703, 00:19:11.410 "max_latency_us": 42137.22074074074 00:19:11.410 } 00:19:11.410 ], 00:19:11.410 "core_count": 1 00:19:11.410 } 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2372504 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2372504 ']' 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2372504 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2372504 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2372504' 00:19:11.410 killing process with pid 2372504 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2372504 00:19:11.410 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.410 00:19:11.410 Latency(us) 00:19:11.410 [2024-10-17T14:47:25.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.410 [2024-10-17T14:47:25.100Z] 
=================================================================================================================== 00:19:11.410 [2024-10-17T14:47:25.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2372504 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LY3SAp84rQ 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LY3SAp84rQ 00:19:11.410 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:11.410 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LY3SAp84rQ 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LY3SAp84rQ 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2373826 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2373826 /var/tmp/bdevperf.sock 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2373826 ']' 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.411 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.411 [2024-10-17 16:47:25.048559] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:11.411 [2024-10-17 16:47:25.048630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373826 ] 00:19:11.669 [2024-10-17 16:47:25.106236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.669 [2024-10-17 16:47:25.162603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.669 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.669 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:11.669 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LY3SAp84rQ 00:19:11.926 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.493 [2024-10-17 16:47:25.883172] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.493 [2024-10-17 16:47:25.888808] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:12.493 [2024-10-17 16:47:25.889275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2179380 (107): Transport endpoint is not connected 00:19:12.493 [2024-10-17 16:47:25.890263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2179380 (9): Bad file descriptor 00:19:12.493 
[2024-10-17 16:47:25.891262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:12.493 [2024-10-17 16:47:25.891306] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:12.493 [2024-10-17 16:47:25.891321] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:12.493 [2024-10-17 16:47:25.891342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:12.493 request: 00:19:12.493 { 00:19:12.493 "name": "TLSTEST", 00:19:12.493 "trtype": "tcp", 00:19:12.493 "traddr": "10.0.0.2", 00:19:12.493 "adrfam": "ipv4", 00:19:12.493 "trsvcid": "4420", 00:19:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.493 "prchk_reftag": false, 00:19:12.493 "prchk_guard": false, 00:19:12.493 "hdgst": false, 00:19:12.493 "ddgst": false, 00:19:12.493 "psk": "key0", 00:19:12.493 "allow_unrecognized_csi": false, 00:19:12.493 "method": "bdev_nvme_attach_controller", 00:19:12.493 "req_id": 1 00:19:12.493 } 00:19:12.493 Got JSON-RPC error response 00:19:12.493 response: 00:19:12.493 { 00:19:12.493 "code": -5, 00:19:12.494 "message": "Input/output error" 00:19:12.494 } 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2373826 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2373826 ']' 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2373826 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2373826 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2373826' 00:19:12.494 killing process with pid 2373826 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2373826 00:19:12.494 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.494 00:19:12.494 Latency(us) 00:19:12.494 [2024-10-17T14:47:26.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.494 [2024-10-17T14:47:26.184Z] =================================================================================================================== 00:19:12.494 [2024-10-17T14:47:26.184Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.494 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2373826 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rbD5lvAYub 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rbD5lvAYub 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rbD5lvAYub 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rbD5lvAYub 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2373969 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2373969 
/var/tmp/bdevperf.sock 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2373969 ']' 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.494 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 [2024-10-17 16:47:26.180302] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:12.494 [2024-10-17 16:47:26.180397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373969 ] 00:19:12.752 [2024-10-17 16:47:26.238694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.752 [2024-10-17 16:47:26.300689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.752 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.752 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.752 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rbD5lvAYub 00:19:13.011 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:13.268 [2024-10-17 16:47:26.932489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.268 [2024-10-17 16:47:26.939269] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.269 [2024-10-17 16:47:26.939298] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.269 [2024-10-17 16:47:26.939346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:13.269 [2024-10-17 16:47:26.939561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66380 (107): Transport endpoint is not connected 00:19:13.269 [2024-10-17 16:47:26.940545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66380 (9): Bad file descriptor 00:19:13.269 [2024-10-17 16:47:26.941546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.269 [2024-10-17 16:47:26.941566] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.269 [2024-10-17 16:47:26.941580] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:13.269 [2024-10-17 16:47:26.941593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:13.269 request: 00:19:13.269 { 00:19:13.269 "name": "TLSTEST", 00:19:13.269 "trtype": "tcp", 00:19:13.269 "traddr": "10.0.0.2", 00:19:13.269 "adrfam": "ipv4", 00:19:13.269 "trsvcid": "4420", 00:19:13.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.269 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:13.269 "prchk_reftag": false, 00:19:13.269 "prchk_guard": false, 00:19:13.269 "hdgst": false, 00:19:13.269 "ddgst": false, 00:19:13.269 "psk": "key0", 00:19:13.269 "allow_unrecognized_csi": false, 00:19:13.269 "method": "bdev_nvme_attach_controller", 00:19:13.269 "req_id": 1 00:19:13.269 } 00:19:13.269 Got JSON-RPC error response 00:19:13.269 response: 00:19:13.269 { 00:19:13.269 "code": -5, 00:19:13.269 "message": "Input/output error" 00:19:13.269 } 00:19:13.528 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2373969 00:19:13.528 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2373969 ']' 00:19:13.528 16:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2373969 00:19:13.528 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.528 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.528 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2373969 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2373969' 00:19:13.528 killing process with pid 2373969 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2373969 00:19:13.528 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.528 00:19:13.528 Latency(us) 00:19:13.528 [2024-10-17T14:47:27.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.528 [2024-10-17T14:47:27.218Z] =================================================================================================================== 00:19:13.528 [2024-10-17T14:47:27.218Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2373969 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.528 16:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rbD5lvAYub 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rbD5lvAYub 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rbD5lvAYub 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rbD5lvAYub 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2374107 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2374107 /var/tmp/bdevperf.sock 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2374107 ']' 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.528 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.785 [2024-10-17 16:47:27.250680] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:13.785 [2024-10-17 16:47:27.250773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374107 ] 00:19:13.785 [2024-10-17 16:47:27.310276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.785 [2024-10-17 16:47:27.367775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.043 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.043 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.043 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rbD5lvAYub 00:19:14.300 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.559 [2024-10-17 16:47:27.991410] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.559 [2024-10-17 16:47:28.002036] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:14.559 [2024-10-17 16:47:28.002067] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:14.559 [2024-10-17 16:47:28.002119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:14.559 [2024-10-17 16:47:28.002769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0380 (107): Transport endpoint is not connected 00:19:14.559 [2024-10-17 16:47:28.003760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0380 (9): Bad file descriptor 00:19:14.559 [2024-10-17 16:47:28.004760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:14.559 [2024-10-17 16:47:28.004783] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.559 [2024-10-17 16:47:28.004798] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:14.559 [2024-10-17 16:47:28.004812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:14.559 request: 00:19:14.559 { 00:19:14.559 "name": "TLSTEST", 00:19:14.559 "trtype": "tcp", 00:19:14.559 "traddr": "10.0.0.2", 00:19:14.559 "adrfam": "ipv4", 00:19:14.559 "trsvcid": "4420", 00:19:14.559 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:14.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.559 "prchk_reftag": false, 00:19:14.559 "prchk_guard": false, 00:19:14.559 "hdgst": false, 00:19:14.559 "ddgst": false, 00:19:14.559 "psk": "key0", 00:19:14.559 "allow_unrecognized_csi": false, 00:19:14.559 "method": "bdev_nvme_attach_controller", 00:19:14.559 "req_id": 1 00:19:14.559 } 00:19:14.559 Got JSON-RPC error response 00:19:14.559 response: 00:19:14.559 { 00:19:14.559 "code": -5, 00:19:14.559 "message": "Input/output error" 00:19:14.559 } 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2374107 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2374107 ']' 00:19:14.559 16:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2374107 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2374107 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2374107' 00:19:14.559 killing process with pid 2374107 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2374107 00:19:14.559 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.559 00:19:14.559 Latency(us) 00:19:14.559 [2024-10-17T14:47:28.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.559 [2024-10-17T14:47:28.249Z] =================================================================================================================== 00:19:14.559 [2024-10-17T14:47:28.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.559 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2374107 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:14.817 16:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.817 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2374254 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.818 16:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2374254 /var/tmp/bdevperf.sock 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2374254 ']' 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.818 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.818 [2024-10-17 16:47:28.313276] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:14.818 [2024-10-17 16:47:28.313376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374254 ] 00:19:14.818 [2024-10-17 16:47:28.370255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.818 [2024-10-17 16:47:28.424508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.076 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:15.333 [2024-10-17 16:47:28.788956] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:15.333 [2024-10-17 16:47:28.789039] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:15.333 request: 00:19:15.333 { 00:19:15.333 "name": "key0", 00:19:15.333 "path": "", 00:19:15.333 "method": "keyring_file_add_key", 00:19:15.333 "req_id": 1 00:19:15.333 } 00:19:15.333 Got JSON-RPC error response 00:19:15.333 response: 00:19:15.333 { 00:19:15.333 "code": -1, 00:19:15.333 "message": "Operation not permitted" 00:19:15.333 } 00:19:15.333 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.592 [2024-10-17 16:47:29.057785] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:15.592 [2024-10-17 16:47:29.057840] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:15.592 request: 00:19:15.592 { 00:19:15.592 "name": "TLSTEST", 00:19:15.592 "trtype": "tcp", 00:19:15.592 "traddr": "10.0.0.2", 00:19:15.592 "adrfam": "ipv4", 00:19:15.592 "trsvcid": "4420", 00:19:15.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.592 "prchk_reftag": false, 00:19:15.592 "prchk_guard": false, 00:19:15.592 "hdgst": false, 00:19:15.592 "ddgst": false, 00:19:15.592 "psk": "key0", 00:19:15.592 "allow_unrecognized_csi": false, 00:19:15.592 "method": "bdev_nvme_attach_controller", 00:19:15.592 "req_id": 1 00:19:15.592 } 00:19:15.592 Got JSON-RPC error response 00:19:15.592 response: 00:19:15.592 { 00:19:15.592 "code": -126, 00:19:15.592 "message": "Required key not available" 00:19:15.592 } 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2374254 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2374254 ']' 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2374254 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2374254 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2374254' 00:19:15.592 killing process with pid 2374254 
00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2374254 00:19:15.592 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.592 00:19:15.592 Latency(us) 00:19:15.592 [2024-10-17T14:47:29.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.592 [2024-10-17T14:47:29.282Z] =================================================================================================================== 00:19:15.592 [2024-10-17T14:47:29.282Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.592 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2374254 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2370602 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2370602 ']' 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2370602 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2370602 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2370602' 00:19:15.850 killing process with pid 2370602 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2370602 00:19:15.850 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2370602 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ks8EnVjUHZ 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.108 16:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ks8EnVjUHZ 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:16.108 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2374407 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2374407 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2374407 ']' 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.109 [2024-10-17 16:47:29.660381] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:16.109 [2024-10-17 16:47:29.660493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.109 [2024-10-17 16:47:29.728938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.109 [2024-10-17 16:47:29.788166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.109 [2024-10-17 16:47:29.788235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.109 [2024-10-17 16:47:29.788262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.109 [2024-10-17 16:47:29.788275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.109 [2024-10-17 16:47:29.788286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.109 [2024-10-17 16:47:29.788931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ks8EnVjUHZ 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ks8EnVjUHZ 00:19:16.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:16.625 [2024-10-17 16:47:30.204499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.625 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:16.883 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:17.141 [2024-10-17 16:47:30.753943] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.141 [2024-10-17 16:47:30.754233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:17.141 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:17.399 malloc0 00:19:17.399 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:17.657 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:17.915 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ks8EnVjUHZ 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ks8EnVjUHZ 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2374692 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.481 16:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2374692 /var/tmp/bdevperf.sock 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2374692 ']' 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.481 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.481 [2024-10-17 16:47:31.979634] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:18.481 [2024-10-17 16:47:31.979705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374692 ] 00:19:18.481 [2024-10-17 16:47:32.035623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.481 [2024-10-17 16:47:32.091735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.738 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.738 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.738 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:18.996 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.254 [2024-10-17 16:47:32.761492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.254 TLSTESTn1 00:19:19.254 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:19.512 Running I/O for 10 seconds... 
00:19:21.370 3305.00 IOPS, 12.91 MiB/s [2024-10-17T14:47:35.993Z] 3387.50 IOPS, 13.23 MiB/s [2024-10-17T14:47:37.364Z] 3417.67 IOPS, 13.35 MiB/s [2024-10-17T14:47:38.298Z] 3417.75 IOPS, 13.35 MiB/s [2024-10-17T14:47:39.231Z] 3434.20 IOPS, 13.41 MiB/s [2024-10-17T14:47:40.166Z] 3434.33 IOPS, 13.42 MiB/s [2024-10-17T14:47:41.098Z] 3446.71 IOPS, 13.46 MiB/s [2024-10-17T14:47:42.071Z] 3455.12 IOPS, 13.50 MiB/s [2024-10-17T14:47:43.033Z] 3459.78 IOPS, 13.51 MiB/s [2024-10-17T14:47:43.291Z] 3462.40 IOPS, 13.53 MiB/s 00:19:29.601 Latency(us) 00:19:29.601 [2024-10-17T14:47:43.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.601 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.601 Verification LBA range: start 0x0 length 0x2000 00:19:29.601 TLSTESTn1 : 10.03 3466.15 13.54 0.00 0.00 36870.76 6262.33 43108.12 00:19:29.601 [2024-10-17T14:47:43.291Z] =================================================================================================================== 00:19:29.601 [2024-10-17T14:47:43.291Z] Total : 3466.15 13.54 0.00 0.00 36870.76 6262.33 43108.12 00:19:29.601 { 00:19:29.601 "results": [ 00:19:29.601 { 00:19:29.601 "job": "TLSTESTn1", 00:19:29.601 "core_mask": "0x4", 00:19:29.601 "workload": "verify", 00:19:29.601 "status": "finished", 00:19:29.601 "verify_range": { 00:19:29.601 "start": 0, 00:19:29.601 "length": 8192 00:19:29.601 }, 00:19:29.601 "queue_depth": 128, 00:19:29.601 "io_size": 4096, 00:19:29.601 "runtime": 10.025821, 00:19:29.601 "iops": 3466.150053945707, 00:19:29.601 "mibps": 13.539648648225418, 00:19:29.601 "io_failed": 0, 00:19:29.601 "io_timeout": 0, 00:19:29.601 "avg_latency_us": 36870.75614284481, 00:19:29.601 "min_latency_us": 6262.328888888889, 00:19:29.601 "max_latency_us": 43108.124444444446 00:19:29.601 } 00:19:29.601 ], 00:19:29.601 "core_count": 1 00:19:29.601 } 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2374692 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2374692 ']' 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2374692 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2374692 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2374692' 00:19:29.601 killing process with pid 2374692 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2374692 00:19:29.601 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.601 00:19:29.601 Latency(us) 00:19:29.601 [2024-10-17T14:47:43.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.601 [2024-10-17T14:47:43.291Z] =================================================================================================================== 00:19:29.601 [2024-10-17T14:47:43.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.601 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2374692 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ks8EnVjUHZ 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ks8EnVjUHZ 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ks8EnVjUHZ 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ks8EnVjUHZ 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ks8EnVjUHZ 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2376014 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2376014 /var/tmp/bdevperf.sock 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2376014 ']' 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.860 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 [2024-10-17 16:47:43.356056] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:29.860 [2024-10-17 16:47:43.356155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376014 ] 00:19:29.860 [2024-10-17 16:47:43.412965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.860 [2024-10-17 16:47:43.467390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.117 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.117 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.117 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:30.375 [2024-10-17 16:47:43.830656] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ks8EnVjUHZ': 0100666 00:19:30.375 [2024-10-17 16:47:43.830702] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:30.375 request: 00:19:30.375 { 00:19:30.375 "name": "key0", 00:19:30.375 "path": "/tmp/tmp.ks8EnVjUHZ", 00:19:30.375 "method": "keyring_file_add_key", 00:19:30.375 "req_id": 1 00:19:30.375 } 00:19:30.375 Got JSON-RPC error response 00:19:30.375 response: 00:19:30.375 { 00:19:30.375 "code": -1, 00:19:30.375 "message": "Operation not permitted" 00:19:30.375 } 00:19:30.375 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.633 [2024-10-17 16:47:44.091495] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.633 [2024-10-17 16:47:44.091552] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:30.633 request: 00:19:30.633 { 00:19:30.633 "name": "TLSTEST", 00:19:30.633 "trtype": "tcp", 00:19:30.633 "traddr": "10.0.0.2", 00:19:30.633 "adrfam": "ipv4", 00:19:30.633 "trsvcid": "4420", 00:19:30.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.633 "prchk_reftag": false, 00:19:30.633 "prchk_guard": false, 00:19:30.633 "hdgst": false, 00:19:30.633 "ddgst": false, 00:19:30.633 "psk": "key0", 00:19:30.633 "allow_unrecognized_csi": false, 00:19:30.633 "method": "bdev_nvme_attach_controller", 00:19:30.633 "req_id": 1 00:19:30.633 } 00:19:30.633 Got JSON-RPC error response 00:19:30.633 response: 00:19:30.633 { 00:19:30.633 "code": -126, 00:19:30.633 "message": "Required key not available" 00:19:30.633 } 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2376014 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2376014 ']' 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2376014 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2376014 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2376014' 00:19:30.633 killing process with pid 2376014 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2376014 00:19:30.633 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.633 00:19:30.633 Latency(us) 00:19:30.633 [2024-10-17T14:47:44.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.633 [2024-10-17T14:47:44.323Z] =================================================================================================================== 00:19:30.633 [2024-10-17T14:47:44.323Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.633 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2376014 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2374407 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2374407 ']' 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2374407 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2374407 00:19:30.891 
16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2374407' 00:19:30.891 killing process with pid 2374407 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2374407 00:19:30.891 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2374407 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2376169 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2376169 00:19:31.149 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2376169 ']' 00:19:31.150 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.150 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.150 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:31.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.150 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.150 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.150 [2024-10-17 16:47:44.689161] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:31.150 [2024-10-17 16:47:44.689260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.150 [2024-10-17 16:47:44.750972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.150 [2024-10-17 16:47:44.808582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.150 [2024-10-17 16:47:44.808643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.150 [2024-10-17 16:47:44.808671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.150 [2024-10-17 16:47:44.808685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.150 [2024-10-17 16:47:44.808696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.150 [2024-10-17 16:47:44.809348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ks8EnVjUHZ 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ks8EnVjUHZ 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ks8EnVjUHZ 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ks8EnVjUHZ 00:19:31.408 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.666 [2024-10-17 16:47:45.193134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.666 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.924 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.181 [2024-10-17 16:47:45.790735] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.181 [2024-10-17 16:47:45.791013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.181 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:32.439 malloc0 00:19:32.439 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:32.697 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:33.263 [2024-10-17 16:47:46.651463] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ks8EnVjUHZ': 0100666 00:19:33.263 [2024-10-17 16:47:46.651509] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:33.263 request: 00:19:33.263 { 00:19:33.263 "name": "key0", 00:19:33.263 "path": "/tmp/tmp.ks8EnVjUHZ", 00:19:33.263 "method": "keyring_file_add_key", 00:19:33.263 "req_id": 1 
00:19:33.263 } 00:19:33.263 Got JSON-RPC error response 00:19:33.263 response: 00:19:33.263 { 00:19:33.263 "code": -1, 00:19:33.263 "message": "Operation not permitted" 00:19:33.263 } 00:19:33.263 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.521 [2024-10-17 16:47:46.976348] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:33.521 [2024-10-17 16:47:46.976403] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:33.521 request: 00:19:33.521 { 00:19:33.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.521 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.521 "psk": "key0", 00:19:33.521 "method": "nvmf_subsystem_add_host", 00:19:33.521 "req_id": 1 00:19:33.522 } 00:19:33.522 Got JSON-RPC error response 00:19:33.522 response: 00:19:33.522 { 00:19:33.522 "code": -32603, 00:19:33.522 "message": "Internal error" 00:19:33.522 } 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2376169 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2376169 ']' 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2376169 00:19:33.522 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:33.522 16:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.522 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2376169 00:19:33.522 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.522 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.522 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2376169' 00:19:33.522 killing process with pid 2376169 00:19:33.522 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2376169 00:19:33.522 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2376169 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ks8EnVjUHZ 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2376468 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2376468 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2376468 ']' 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.780 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.780 [2024-10-17 16:47:47.343011] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:33.780 [2024-10-17 16:47:47.343119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.780 [2024-10-17 16:47:47.410994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.780 [2024-10-17 16:47:47.469260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.780 [2024-10-17 16:47:47.469331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.780 [2024-10-17 16:47:47.469362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.780 [2024-10-17 16:47:47.469372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.780 [2024-10-17 16:47:47.469381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.780 [2024-10-17 16:47:47.469866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ks8EnVjUHZ 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ks8EnVjUHZ 00:19:34.038 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:34.297 [2024-10-17 16:47:47.850665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.297 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:34.555 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:34.813 [2024-10-17 16:47:48.388189] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.813 [2024-10-17 16:47:48.388491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:34.813 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.071 malloc0 00:19:35.071 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.330 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:35.588 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.154 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2376755 00:19:36.154 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.154 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2376755 /var/tmp/bdevperf.sock 00:19:36.154 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2376755 ']' 00:19:36.154 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:36.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.155 [2024-10-17 16:47:49.584687] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:36.155 [2024-10-17 16:47:49.584779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376755 ] 00:19:36.155 [2024-10-17 16:47:49.642239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.155 [2024-10-17 16:47:49.700240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.155 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:36.415 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.675 [2024-10-17 16:47:50.332236] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.933 TLSTESTn1 00:19:36.933 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:37.190 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:37.190 "subsystems": [ 00:19:37.190 { 00:19:37.190 "subsystem": "keyring", 00:19:37.190 "config": [ 00:19:37.190 { 00:19:37.190 "method": "keyring_file_add_key", 00:19:37.190 "params": { 00:19:37.190 "name": "key0", 00:19:37.190 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:37.190 } 00:19:37.190 } 00:19:37.190 ] 00:19:37.190 }, 00:19:37.190 { 00:19:37.190 "subsystem": "iobuf", 00:19:37.190 "config": [ 00:19:37.190 { 00:19:37.190 "method": "iobuf_set_options", 00:19:37.190 "params": { 00:19:37.190 "small_pool_count": 8192, 00:19:37.190 "large_pool_count": 1024, 00:19:37.190 "small_bufsize": 8192, 00:19:37.190 "large_bufsize": 135168 00:19:37.190 } 00:19:37.190 } 00:19:37.190 ] 00:19:37.190 }, 00:19:37.190 { 00:19:37.190 "subsystem": "sock", 00:19:37.190 "config": [ 00:19:37.190 { 00:19:37.190 "method": "sock_set_default_impl", 00:19:37.190 "params": { 00:19:37.190 "impl_name": "posix" 00:19:37.190 } 00:19:37.190 }, 00:19:37.190 { 00:19:37.190 "method": "sock_impl_set_options", 00:19:37.190 "params": { 00:19:37.190 "impl_name": "ssl", 00:19:37.190 "recv_buf_size": 4096, 00:19:37.190 "send_buf_size": 4096, 00:19:37.190 "enable_recv_pipe": true, 00:19:37.190 "enable_quickack": false, 00:19:37.190 "enable_placement_id": 0, 00:19:37.190 "enable_zerocopy_send_server": true, 00:19:37.190 "enable_zerocopy_send_client": false, 00:19:37.190 "zerocopy_threshold": 0, 00:19:37.190 "tls_version": 0, 00:19:37.190 "enable_ktls": false 00:19:37.190 } 00:19:37.190 }, 00:19:37.190 { 00:19:37.190 "method": "sock_impl_set_options", 00:19:37.190 "params": { 00:19:37.190 "impl_name": "posix", 00:19:37.191 "recv_buf_size": 2097152, 00:19:37.191 "send_buf_size": 2097152, 00:19:37.191 "enable_recv_pipe": true, 00:19:37.191 "enable_quickack": false, 00:19:37.191 "enable_placement_id": 0, 00:19:37.191 
"enable_zerocopy_send_server": true, 00:19:37.191 "enable_zerocopy_send_client": false, 00:19:37.191 "zerocopy_threshold": 0, 00:19:37.191 "tls_version": 0, 00:19:37.191 "enable_ktls": false 00:19:37.191 } 00:19:37.191 } 00:19:37.191 ] 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "subsystem": "vmd", 00:19:37.191 "config": [] 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "subsystem": "accel", 00:19:37.191 "config": [ 00:19:37.191 { 00:19:37.191 "method": "accel_set_options", 00:19:37.191 "params": { 00:19:37.191 "small_cache_size": 128, 00:19:37.191 "large_cache_size": 16, 00:19:37.191 "task_count": 2048, 00:19:37.191 "sequence_count": 2048, 00:19:37.191 "buf_count": 2048 00:19:37.191 } 00:19:37.191 } 00:19:37.191 ] 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "subsystem": "bdev", 00:19:37.191 "config": [ 00:19:37.191 { 00:19:37.191 "method": "bdev_set_options", 00:19:37.191 "params": { 00:19:37.191 "bdev_io_pool_size": 65535, 00:19:37.191 "bdev_io_cache_size": 256, 00:19:37.191 "bdev_auto_examine": true, 00:19:37.191 "iobuf_small_cache_size": 128, 00:19:37.191 "iobuf_large_cache_size": 16 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "bdev_raid_set_options", 00:19:37.191 "params": { 00:19:37.191 "process_window_size_kb": 1024, 00:19:37.191 "process_max_bandwidth_mb_sec": 0 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "bdev_iscsi_set_options", 00:19:37.191 "params": { 00:19:37.191 "timeout_sec": 30 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "bdev_nvme_set_options", 00:19:37.191 "params": { 00:19:37.191 "action_on_timeout": "none", 00:19:37.191 "timeout_us": 0, 00:19:37.191 "timeout_admin_us": 0, 00:19:37.191 "keep_alive_timeout_ms": 10000, 00:19:37.191 "arbitration_burst": 0, 00:19:37.191 "low_priority_weight": 0, 00:19:37.191 "medium_priority_weight": 0, 00:19:37.191 "high_priority_weight": 0, 00:19:37.191 "nvme_adminq_poll_period_us": 10000, 00:19:37.191 "nvme_ioq_poll_period_us": 0, 00:19:37.191 
"io_queue_requests": 0, 00:19:37.191 "delay_cmd_submit": true, 00:19:37.191 "transport_retry_count": 4, 00:19:37.191 "bdev_retry_count": 3, 00:19:37.191 "transport_ack_timeout": 0, 00:19:37.191 "ctrlr_loss_timeout_sec": 0, 00:19:37.191 "reconnect_delay_sec": 0, 00:19:37.191 "fast_io_fail_timeout_sec": 0, 00:19:37.191 "disable_auto_failback": false, 00:19:37.191 "generate_uuids": false, 00:19:37.191 "transport_tos": 0, 00:19:37.191 "nvme_error_stat": false, 00:19:37.191 "rdma_srq_size": 0, 00:19:37.191 "io_path_stat": false, 00:19:37.191 "allow_accel_sequence": false, 00:19:37.191 "rdma_max_cq_size": 0, 00:19:37.191 "rdma_cm_event_timeout_ms": 0, 00:19:37.191 "dhchap_digests": [ 00:19:37.191 "sha256", 00:19:37.191 "sha384", 00:19:37.191 "sha512" 00:19:37.191 ], 00:19:37.191 "dhchap_dhgroups": [ 00:19:37.191 "null", 00:19:37.191 "ffdhe2048", 00:19:37.191 "ffdhe3072", 00:19:37.191 "ffdhe4096", 00:19:37.191 "ffdhe6144", 00:19:37.191 "ffdhe8192" 00:19:37.191 ] 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "bdev_nvme_set_hotplug", 00:19:37.191 "params": { 00:19:37.191 "period_us": 100000, 00:19:37.191 "enable": false 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "bdev_malloc_create", 00:19:37.191 "params": { 00:19:37.191 "name": "malloc0", 00:19:37.191 "num_blocks": 8192, 00:19:37.191 "block_size": 4096, 00:19:37.191 "physical_block_size": 4096, 00:19:37.191 "uuid": "97d1b5d3-3d42-43f2-87f8-3b2c4a06970d", 00:19:37.191 "optimal_io_boundary": 0, 00:19:37.191 "md_size": 0, 00:19:37.191 "dif_type": 0, 00:19:37.191 "dif_is_head_of_md": false, 00:19:37.191 "dif_pi_format": 0 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "bdev_wait_for_examine" 00:19:37.191 } 00:19:37.191 ] 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "subsystem": "nbd", 00:19:37.191 "config": [] 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "subsystem": "scheduler", 00:19:37.191 "config": [ 00:19:37.191 { 00:19:37.191 "method": 
"framework_set_scheduler", 00:19:37.191 "params": { 00:19:37.191 "name": "static" 00:19:37.191 } 00:19:37.191 } 00:19:37.191 ] 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "subsystem": "nvmf", 00:19:37.191 "config": [ 00:19:37.191 { 00:19:37.191 "method": "nvmf_set_config", 00:19:37.191 "params": { 00:19:37.191 "discovery_filter": "match_any", 00:19:37.191 "admin_cmd_passthru": { 00:19:37.191 "identify_ctrlr": false 00:19:37.191 }, 00:19:37.191 "dhchap_digests": [ 00:19:37.191 "sha256", 00:19:37.191 "sha384", 00:19:37.191 "sha512" 00:19:37.191 ], 00:19:37.191 "dhchap_dhgroups": [ 00:19:37.191 "null", 00:19:37.191 "ffdhe2048", 00:19:37.191 "ffdhe3072", 00:19:37.191 "ffdhe4096", 00:19:37.191 "ffdhe6144", 00:19:37.191 "ffdhe8192" 00:19:37.191 ] 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "nvmf_set_max_subsystems", 00:19:37.191 "params": { 00:19:37.191 "max_subsystems": 1024 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "nvmf_set_crdt", 00:19:37.191 "params": { 00:19:37.191 "crdt1": 0, 00:19:37.191 "crdt2": 0, 00:19:37.191 "crdt3": 0 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "nvmf_create_transport", 00:19:37.191 "params": { 00:19:37.191 "trtype": "TCP", 00:19:37.191 "max_queue_depth": 128, 00:19:37.191 "max_io_qpairs_per_ctrlr": 127, 00:19:37.191 "in_capsule_data_size": 4096, 00:19:37.191 "max_io_size": 131072, 00:19:37.191 "io_unit_size": 131072, 00:19:37.191 "max_aq_depth": 128, 00:19:37.191 "num_shared_buffers": 511, 00:19:37.191 "buf_cache_size": 4294967295, 00:19:37.191 "dif_insert_or_strip": false, 00:19:37.191 "zcopy": false, 00:19:37.191 "c2h_success": false, 00:19:37.191 "sock_priority": 0, 00:19:37.191 "abort_timeout_sec": 1, 00:19:37.191 "ack_timeout": 0, 00:19:37.191 "data_wr_pool_size": 0 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "nvmf_create_subsystem", 00:19:37.191 "params": { 00:19:37.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.191 
"allow_any_host": false, 00:19:37.191 "serial_number": "SPDK00000000000001", 00:19:37.191 "model_number": "SPDK bdev Controller", 00:19:37.191 "max_namespaces": 10, 00:19:37.191 "min_cntlid": 1, 00:19:37.191 "max_cntlid": 65519, 00:19:37.191 "ana_reporting": false 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.191 "method": "nvmf_subsystem_add_host", 00:19:37.191 "params": { 00:19:37.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.191 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.191 "psk": "key0" 00:19:37.191 } 00:19:37.191 }, 00:19:37.191 { 00:19:37.192 "method": "nvmf_subsystem_add_ns", 00:19:37.192 "params": { 00:19:37.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.192 "namespace": { 00:19:37.192 "nsid": 1, 00:19:37.192 "bdev_name": "malloc0", 00:19:37.192 "nguid": "97D1B5D33D4243F287F83B2C4A06970D", 00:19:37.192 "uuid": "97d1b5d3-3d42-43f2-87f8-3b2c4a06970d", 00:19:37.192 "no_auto_visible": false 00:19:37.192 } 00:19:37.192 } 00:19:37.192 }, 00:19:37.192 { 00:19:37.192 "method": "nvmf_subsystem_add_listener", 00:19:37.192 "params": { 00:19:37.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.192 "listen_address": { 00:19:37.192 "trtype": "TCP", 00:19:37.192 "adrfam": "IPv4", 00:19:37.192 "traddr": "10.0.0.2", 00:19:37.192 "trsvcid": "4420" 00:19:37.192 }, 00:19:37.192 "secure_channel": true 00:19:37.192 } 00:19:37.192 } 00:19:37.192 ] 00:19:37.192 } 00:19:37.192 ] 00:19:37.192 }' 00:19:37.192 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:37.450 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:37.450 "subsystems": [ 00:19:37.450 { 00:19:37.450 "subsystem": "keyring", 00:19:37.450 "config": [ 00:19:37.450 { 00:19:37.450 "method": "keyring_file_add_key", 00:19:37.450 "params": { 00:19:37.450 "name": "key0", 00:19:37.450 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:37.450 } 
00:19:37.450 } 00:19:37.450 ] 00:19:37.450 }, 00:19:37.450 { 00:19:37.450 "subsystem": "iobuf", 00:19:37.450 "config": [ 00:19:37.450 { 00:19:37.450 "method": "iobuf_set_options", 00:19:37.450 "params": { 00:19:37.450 "small_pool_count": 8192, 00:19:37.450 "large_pool_count": 1024, 00:19:37.450 "small_bufsize": 8192, 00:19:37.450 "large_bufsize": 135168 00:19:37.450 } 00:19:37.450 } 00:19:37.450 ] 00:19:37.450 }, 00:19:37.450 { 00:19:37.450 "subsystem": "sock", 00:19:37.450 "config": [ 00:19:37.450 { 00:19:37.450 "method": "sock_set_default_impl", 00:19:37.450 "params": { 00:19:37.450 "impl_name": "posix" 00:19:37.450 } 00:19:37.450 }, 00:19:37.450 { 00:19:37.450 "method": "sock_impl_set_options", 00:19:37.450 "params": { 00:19:37.450 "impl_name": "ssl", 00:19:37.450 "recv_buf_size": 4096, 00:19:37.450 "send_buf_size": 4096, 00:19:37.450 "enable_recv_pipe": true, 00:19:37.450 "enable_quickack": false, 00:19:37.450 "enable_placement_id": 0, 00:19:37.450 "enable_zerocopy_send_server": true, 00:19:37.450 "enable_zerocopy_send_client": false, 00:19:37.450 "zerocopy_threshold": 0, 00:19:37.450 "tls_version": 0, 00:19:37.450 "enable_ktls": false 00:19:37.450 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "sock_impl_set_options", 00:19:37.451 "params": { 00:19:37.451 "impl_name": "posix", 00:19:37.451 "recv_buf_size": 2097152, 00:19:37.451 "send_buf_size": 2097152, 00:19:37.451 "enable_recv_pipe": true, 00:19:37.451 "enable_quickack": false, 00:19:37.451 "enable_placement_id": 0, 00:19:37.451 "enable_zerocopy_send_server": true, 00:19:37.451 "enable_zerocopy_send_client": false, 00:19:37.451 "zerocopy_threshold": 0, 00:19:37.451 "tls_version": 0, 00:19:37.451 "enable_ktls": false 00:19:37.451 } 00:19:37.451 } 00:19:37.451 ] 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "subsystem": "vmd", 00:19:37.451 "config": [] 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "subsystem": "accel", 00:19:37.451 "config": [ 00:19:37.451 { 00:19:37.451 "method": "accel_set_options", 
00:19:37.451 "params": { 00:19:37.451 "small_cache_size": 128, 00:19:37.451 "large_cache_size": 16, 00:19:37.451 "task_count": 2048, 00:19:37.451 "sequence_count": 2048, 00:19:37.451 "buf_count": 2048 00:19:37.451 } 00:19:37.451 } 00:19:37.451 ] 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "subsystem": "bdev", 00:19:37.451 "config": [ 00:19:37.451 { 00:19:37.451 "method": "bdev_set_options", 00:19:37.451 "params": { 00:19:37.451 "bdev_io_pool_size": 65535, 00:19:37.451 "bdev_io_cache_size": 256, 00:19:37.451 "bdev_auto_examine": true, 00:19:37.451 "iobuf_small_cache_size": 128, 00:19:37.451 "iobuf_large_cache_size": 16 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "bdev_raid_set_options", 00:19:37.451 "params": { 00:19:37.451 "process_window_size_kb": 1024, 00:19:37.451 "process_max_bandwidth_mb_sec": 0 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "bdev_iscsi_set_options", 00:19:37.451 "params": { 00:19:37.451 "timeout_sec": 30 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "bdev_nvme_set_options", 00:19:37.451 "params": { 00:19:37.451 "action_on_timeout": "none", 00:19:37.451 "timeout_us": 0, 00:19:37.451 "timeout_admin_us": 0, 00:19:37.451 "keep_alive_timeout_ms": 10000, 00:19:37.451 "arbitration_burst": 0, 00:19:37.451 "low_priority_weight": 0, 00:19:37.451 "medium_priority_weight": 0, 00:19:37.451 "high_priority_weight": 0, 00:19:37.451 "nvme_adminq_poll_period_us": 10000, 00:19:37.451 "nvme_ioq_poll_period_us": 0, 00:19:37.451 "io_queue_requests": 512, 00:19:37.451 "delay_cmd_submit": true, 00:19:37.451 "transport_retry_count": 4, 00:19:37.451 "bdev_retry_count": 3, 00:19:37.451 "transport_ack_timeout": 0, 00:19:37.451 "ctrlr_loss_timeout_sec": 0, 00:19:37.451 "reconnect_delay_sec": 0, 00:19:37.451 "fast_io_fail_timeout_sec": 0, 00:19:37.451 "disable_auto_failback": false, 00:19:37.451 "generate_uuids": false, 00:19:37.451 "transport_tos": 0, 00:19:37.451 "nvme_error_stat": false, 00:19:37.451 
"rdma_srq_size": 0, 00:19:37.451 "io_path_stat": false, 00:19:37.451 "allow_accel_sequence": false, 00:19:37.451 "rdma_max_cq_size": 0, 00:19:37.451 "rdma_cm_event_timeout_ms": 0, 00:19:37.451 "dhchap_digests": [ 00:19:37.451 "sha256", 00:19:37.451 "sha384", 00:19:37.451 "sha512" 00:19:37.451 ], 00:19:37.451 "dhchap_dhgroups": [ 00:19:37.451 "null", 00:19:37.451 "ffdhe2048", 00:19:37.451 "ffdhe3072", 00:19:37.451 "ffdhe4096", 00:19:37.451 "ffdhe6144", 00:19:37.451 "ffdhe8192" 00:19:37.451 ] 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "bdev_nvme_attach_controller", 00:19:37.451 "params": { 00:19:37.451 "name": "TLSTEST", 00:19:37.451 "trtype": "TCP", 00:19:37.451 "adrfam": "IPv4", 00:19:37.451 "traddr": "10.0.0.2", 00:19:37.451 "trsvcid": "4420", 00:19:37.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.451 "prchk_reftag": false, 00:19:37.451 "prchk_guard": false, 00:19:37.451 "ctrlr_loss_timeout_sec": 0, 00:19:37.451 "reconnect_delay_sec": 0, 00:19:37.451 "fast_io_fail_timeout_sec": 0, 00:19:37.451 "psk": "key0", 00:19:37.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.451 "hdgst": false, 00:19:37.451 "ddgst": false, 00:19:37.451 "multipath": "multipath" 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "bdev_nvme_set_hotplug", 00:19:37.451 "params": { 00:19:37.451 "period_us": 100000, 00:19:37.451 "enable": false 00:19:37.451 } 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "method": "bdev_wait_for_examine" 00:19:37.451 } 00:19:37.451 ] 00:19:37.451 }, 00:19:37.451 { 00:19:37.451 "subsystem": "nbd", 00:19:37.451 "config": [] 00:19:37.451 } 00:19:37.451 ] 00:19:37.451 }' 00:19:37.451 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2376755 00:19:37.451 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2376755 ']' 00:19:37.451 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2376755 00:19:37.710 
16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2376755 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2376755' 00:19:37.710 killing process with pid 2376755 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2376755 00:19:37.710 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.710 00:19:37.710 Latency(us) 00:19:37.710 [2024-10-17T14:47:51.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.710 [2024-10-17T14:47:51.400Z] =================================================================================================================== 00:19:37.710 [2024-10-17T14:47:51.400Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2376755 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2376468 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2376468 ']' 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2376468 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:19:37.710 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2376468 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2376468' 00:19:37.969 killing process with pid 2376468 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2376468 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2376468 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.969 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:37.969 "subsystems": [ 00:19:37.969 { 00:19:37.969 "subsystem": "keyring", 00:19:37.969 "config": [ 00:19:37.969 { 00:19:37.969 "method": "keyring_file_add_key", 00:19:37.969 "params": { 00:19:37.969 "name": "key0", 00:19:37.969 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:37.969 } 00:19:37.969 } 00:19:37.969 ] 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "subsystem": "iobuf", 00:19:37.969 "config": [ 00:19:37.969 { 00:19:37.969 "method": "iobuf_set_options", 00:19:37.969 "params": { 00:19:37.969 "small_pool_count": 8192, 00:19:37.969 "large_pool_count": 1024, 00:19:37.969 "small_bufsize": 8192, 00:19:37.969 "large_bufsize": 135168 00:19:37.969 } 00:19:37.969 } 00:19:37.969 ] 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "subsystem": "sock", 00:19:37.969 "config": [ 00:19:37.969 
{ 00:19:37.969 "method": "sock_set_default_impl", 00:19:37.969 "params": { 00:19:37.969 "impl_name": "posix" 00:19:37.969 } 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "method": "sock_impl_set_options", 00:19:37.969 "params": { 00:19:37.969 "impl_name": "ssl", 00:19:37.969 "recv_buf_size": 4096, 00:19:37.969 "send_buf_size": 4096, 00:19:37.969 "enable_recv_pipe": true, 00:19:37.969 "enable_quickack": false, 00:19:37.969 "enable_placement_id": 0, 00:19:37.969 "enable_zerocopy_send_server": true, 00:19:37.969 "enable_zerocopy_send_client": false, 00:19:37.969 "zerocopy_threshold": 0, 00:19:37.969 "tls_version": 0, 00:19:37.969 "enable_ktls": false 00:19:37.969 } 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "method": "sock_impl_set_options", 00:19:37.969 "params": { 00:19:37.969 "impl_name": "posix", 00:19:37.969 "recv_buf_size": 2097152, 00:19:37.969 "send_buf_size": 2097152, 00:19:37.969 "enable_recv_pipe": true, 00:19:37.969 "enable_quickack": false, 00:19:37.969 "enable_placement_id": 0, 00:19:37.969 "enable_zerocopy_send_server": true, 00:19:37.969 "enable_zerocopy_send_client": false, 00:19:37.969 "zerocopy_threshold": 0, 00:19:37.969 "tls_version": 0, 00:19:37.969 "enable_ktls": false 00:19:37.969 } 00:19:37.969 } 00:19:37.969 ] 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "subsystem": "vmd", 00:19:37.969 "config": [] 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "subsystem": "accel", 00:19:37.970 "config": [ 00:19:37.970 { 00:19:37.970 "method": "accel_set_options", 00:19:37.970 "params": { 00:19:37.970 "small_cache_size": 128, 00:19:37.970 "large_cache_size": 16, 00:19:37.970 "task_count": 2048, 00:19:37.970 "sequence_count": 2048, 00:19:37.970 "buf_count": 2048 00:19:37.970 } 00:19:37.970 } 00:19:37.970 ] 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "subsystem": "bdev", 00:19:37.970 "config": [ 00:19:37.970 { 00:19:37.970 "method": "bdev_set_options", 00:19:37.970 "params": { 00:19:37.970 "bdev_io_pool_size": 65535, 00:19:37.970 "bdev_io_cache_size": 256, 
00:19:37.970 "bdev_auto_examine": true, 00:19:37.970 "iobuf_small_cache_size": 128, 00:19:37.970 "iobuf_large_cache_size": 16 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "bdev_raid_set_options", 00:19:37.970 "params": { 00:19:37.970 "process_window_size_kb": 1024, 00:19:37.970 "process_max_bandwidth_mb_sec": 0 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "bdev_iscsi_set_options", 00:19:37.970 "params": { 00:19:37.970 "timeout_sec": 30 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "bdev_nvme_set_options", 00:19:37.970 "params": { 00:19:37.970 "action_on_timeout": "none", 00:19:37.970 "timeout_us": 0, 00:19:37.970 "timeout_admin_us": 0, 00:19:37.970 "keep_alive_timeout_ms": 10000, 00:19:37.970 "arbitration_burst": 0, 00:19:37.970 "low_priority_weight": 0, 00:19:37.970 "medium_priority_weight": 0, 00:19:37.970 "high_priority_weight": 0, 00:19:37.970 "nvme_adminq_poll_period_us": 10000, 00:19:37.970 "nvme_ioq_poll_period_us": 0, 00:19:37.970 "io_queue_requests": 0, 00:19:37.970 "delay_cmd_submit": true, 00:19:37.970 "transport_retry_count": 4, 00:19:37.970 "bdev_retry_count": 3, 00:19:37.970 "transport_ack_timeout": 0, 00:19:37.970 "ctrlr_loss_timeout_sec": 0, 00:19:37.970 "reconnect_delay_sec": 0, 00:19:37.970 "fast_io_fail_timeout_sec": 0, 00:19:37.970 "disable_auto_failback": false, 00:19:37.970 "generate_uuids": false, 00:19:37.970 "transport_tos": 0, 00:19:37.970 "nvme_error_stat": false, 00:19:37.970 "rdma_srq_size": 0, 00:19:37.970 "io_path_stat": false, 00:19:37.970 "allow_accel_sequence": false, 00:19:37.970 "rdma_max_cq_size": 0, 00:19:37.970 "rdma_cm_event_timeout_ms": 0, 00:19:37.970 "dhchap_digests": [ 00:19:37.970 "sha256", 00:19:37.970 "sha384", 00:19:37.970 "sha512" 00:19:37.970 ], 00:19:37.970 "dhchap_dhgroups": [ 00:19:37.970 "null", 00:19:37.970 "ffdhe2048", 00:19:37.970 "ffdhe3072", 00:19:37.970 "ffdhe4096", 00:19:37.970 "ffdhe6144", 00:19:37.970 "ffdhe8192" 00:19:37.970 ] 
00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "bdev_nvme_set_hotplug", 00:19:37.970 "params": { 00:19:37.970 "period_us": 100000, 00:19:37.970 "enable": false 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "bdev_malloc_create", 00:19:37.970 "params": { 00:19:37.970 "name": "malloc0", 00:19:37.970 "num_blocks": 8192, 00:19:37.970 "block_size": 4096, 00:19:37.970 "physical_block_size": 4096, 00:19:37.970 "uuid": "97d1b5d3-3d42-43f2-87f8-3b2c4a06970d", 00:19:37.970 "optimal_io_boundary": 0, 00:19:37.970 "md_size": 0, 00:19:37.970 "dif_type": 0, 00:19:37.970 "dif_is_head_of_md": false, 00:19:37.970 "dif_pi_format": 0 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "bdev_wait_for_examine" 00:19:37.970 } 00:19:37.970 ] 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "subsystem": "nbd", 00:19:37.970 "config": [] 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "subsystem": "scheduler", 00:19:37.970 "config": [ 00:19:37.970 { 00:19:37.970 "method": "framework_set_scheduler", 00:19:37.970 "params": { 00:19:37.970 "name": "static" 00:19:37.970 } 00:19:37.970 } 00:19:37.970 ] 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "subsystem": "nvmf", 00:19:37.970 "config": [ 00:19:37.970 { 00:19:37.970 "method": "nvmf_set_config", 00:19:37.970 "params": { 00:19:37.970 "discovery_filter": "match_any", 00:19:37.970 "admin_cmd_passthru": { 00:19:37.970 "identify_ctrlr": false 00:19:37.970 }, 00:19:37.970 "dhchap_digests": [ 00:19:37.970 "sha256", 00:19:37.970 "sha384", 00:19:37.970 "sha512" 00:19:37.970 ], 00:19:37.970 "dhchap_dhgroups": [ 00:19:37.970 "null", 00:19:37.970 "ffdhe2048", 00:19:37.970 "ffdhe3072", 00:19:37.970 "ffdhe4096", 00:19:37.970 "ffdhe6144", 00:19:37.970 "ffdhe8192" 00:19:37.970 ] 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "nvmf_set_max_subsystems", 00:19:37.970 "params": { 00:19:37.970 "max_subsystems": 1024 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": 
"nvmf_set_crdt", 00:19:37.970 "params": { 00:19:37.970 "crdt1": 0, 00:19:37.970 "crdt2": 0, 00:19:37.970 "crdt3": 0 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "nvmf_create_transport", 00:19:37.970 "params": { 00:19:37.970 "trtype": "TCP", 00:19:37.970 "max_queue_depth": 128, 00:19:37.970 "max_io_qpairs_per_ctrlr": 127, 00:19:37.970 "in_capsule_data_size": 4096, 00:19:37.970 "max_io_size": 131072, 00:19:37.970 "io_unit_size": 131072, 00:19:37.970 "max_aq_depth": 128, 00:19:37.970 "num_shared_buffers": 511, 00:19:37.970 "buf_cache_size": 4294967295, 00:19:37.970 "dif_insert_or_strip": false, 00:19:37.970 "zcopy": false, 00:19:37.970 "c2h_success": false, 00:19:37.970 "sock_priority": 0, 00:19:37.970 "abort_timeout_sec": 1, 00:19:37.970 "ack_timeout": 0, 00:19:37.970 "data_wr_pool_size": 0 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "nvmf_create_subsystem", 00:19:37.970 "params": { 00:19:37.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.970 "allow_any_host": false, 00:19:37.970 "serial_number": "SPDK00000000000001", 00:19:37.970 "model_number": "SPDK bdev Controller", 00:19:37.970 "max_namespaces": 10, 00:19:37.970 "min_cntlid": 1, 00:19:37.970 "max_cntlid": 65519, 00:19:37.970 "ana_reporting": false 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "nvmf_subsystem_add_host", 00:19:37.970 "params": { 00:19:37.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.970 "host": "nqn.2016-06.io.spdk:host1", 00:19:37.970 "psk": "key0" 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 00:19:37.970 "method": "nvmf_subsystem_add_ns", 00:19:37.970 "params": { 00:19:37.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.970 "namespace": { 00:19:37.970 "nsid": 1, 00:19:37.970 "bdev_name": "malloc0", 00:19:37.970 "nguid": "97D1B5D33D4243F287F83B2C4A06970D", 00:19:37.970 "uuid": "97d1b5d3-3d42-43f2-87f8-3b2c4a06970d", 00:19:37.970 "no_auto_visible": false 00:19:37.970 } 00:19:37.970 } 00:19:37.970 }, 00:19:37.970 { 
00:19:37.970 "method": "nvmf_subsystem_add_listener", 00:19:37.970 "params": { 00:19:37.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.970 "listen_address": { 00:19:37.970 "trtype": "TCP", 00:19:37.970 "adrfam": "IPv4", 00:19:37.970 "traddr": "10.0.0.2", 00:19:37.970 "trsvcid": "4420" 00:19:37.970 }, 00:19:37.970 "secure_channel": true 00:19:37.970 } 00:19:37.970 } 00:19:37.970 ] 00:19:37.970 } 00:19:37.970 ] 00:19:37.970 }' 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2377038 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2377038 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2377038 ']' 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.970 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.971 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.229 [2024-10-17 16:47:51.703811] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:38.229 [2024-10-17 16:47:51.703900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.229 [2024-10-17 16:47:51.766387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.229 [2024-10-17 16:47:51.823339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.229 [2024-10-17 16:47:51.823398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.229 [2024-10-17 16:47:51.823412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.229 [2024-10-17 16:47:51.823422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.229 [2024-10-17 16:47:51.823432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.229 [2024-10-17 16:47:51.824058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.487 [2024-10-17 16:47:52.075545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.487 [2024-10-17 16:47:52.107550] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.488 [2024-10-17 16:47:52.107826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2377190 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2377190 /var/tmp/bdevperf.sock 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2377190 ']' 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:39.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.423 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:39.423 "subsystems": [ 00:19:39.423 { 00:19:39.423 "subsystem": "keyring", 00:19:39.423 "config": [ 00:19:39.423 { 00:19:39.423 "method": "keyring_file_add_key", 00:19:39.423 "params": { 00:19:39.423 "name": "key0", 00:19:39.423 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:39.423 } 00:19:39.423 } 00:19:39.423 ] 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "subsystem": "iobuf", 00:19:39.423 "config": [ 00:19:39.423 { 00:19:39.423 "method": "iobuf_set_options", 00:19:39.423 "params": { 00:19:39.423 "small_pool_count": 8192, 00:19:39.423 "large_pool_count": 1024, 00:19:39.423 "small_bufsize": 8192, 00:19:39.423 "large_bufsize": 135168 00:19:39.423 } 00:19:39.423 } 00:19:39.423 ] 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "subsystem": "sock", 00:19:39.423 "config": [ 00:19:39.423 { 00:19:39.423 "method": "sock_set_default_impl", 00:19:39.423 "params": { 00:19:39.423 "impl_name": "posix" 00:19:39.423 } 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "method": "sock_impl_set_options", 00:19:39.423 "params": { 00:19:39.423 "impl_name": "ssl", 00:19:39.423 "recv_buf_size": 4096, 00:19:39.423 "send_buf_size": 4096, 00:19:39.423 "enable_recv_pipe": true, 00:19:39.423 "enable_quickack": false, 00:19:39.423 "enable_placement_id": 0, 00:19:39.423 "enable_zerocopy_send_server": true, 00:19:39.423 "enable_zerocopy_send_client": false, 00:19:39.423 
"zerocopy_threshold": 0, 00:19:39.423 "tls_version": 0, 00:19:39.423 "enable_ktls": false 00:19:39.423 } 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "method": "sock_impl_set_options", 00:19:39.423 "params": { 00:19:39.423 "impl_name": "posix", 00:19:39.423 "recv_buf_size": 2097152, 00:19:39.423 "send_buf_size": 2097152, 00:19:39.423 "enable_recv_pipe": true, 00:19:39.423 "enable_quickack": false, 00:19:39.423 "enable_placement_id": 0, 00:19:39.423 "enable_zerocopy_send_server": true, 00:19:39.423 "enable_zerocopy_send_client": false, 00:19:39.423 "zerocopy_threshold": 0, 00:19:39.423 "tls_version": 0, 00:19:39.423 "enable_ktls": false 00:19:39.423 } 00:19:39.423 } 00:19:39.423 ] 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "subsystem": "vmd", 00:19:39.423 "config": [] 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "subsystem": "accel", 00:19:39.423 "config": [ 00:19:39.423 { 00:19:39.423 "method": "accel_set_options", 00:19:39.423 "params": { 00:19:39.423 "small_cache_size": 128, 00:19:39.423 "large_cache_size": 16, 00:19:39.423 "task_count": 2048, 00:19:39.423 "sequence_count": 2048, 00:19:39.423 "buf_count": 2048 00:19:39.423 } 00:19:39.423 } 00:19:39.423 ] 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "subsystem": "bdev", 00:19:39.423 "config": [ 00:19:39.423 { 00:19:39.423 "method": "bdev_set_options", 00:19:39.423 "params": { 00:19:39.423 "bdev_io_pool_size": 65535, 00:19:39.423 "bdev_io_cache_size": 256, 00:19:39.423 "bdev_auto_examine": true, 00:19:39.423 "iobuf_small_cache_size": 128, 00:19:39.423 "iobuf_large_cache_size": 16 00:19:39.423 } 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "method": "bdev_raid_set_options", 00:19:39.423 "params": { 00:19:39.423 "process_window_size_kb": 1024, 00:19:39.423 "process_max_bandwidth_mb_sec": 0 00:19:39.423 } 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "method": "bdev_iscsi_set_options", 00:19:39.423 "params": { 00:19:39.423 "timeout_sec": 30 00:19:39.423 } 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "method": 
"bdev_nvme_set_options", 00:19:39.423 "params": { 00:19:39.423 "action_on_timeout": "none", 00:19:39.423 "timeout_us": 0, 00:19:39.423 "timeout_admin_us": 0, 00:19:39.423 "keep_alive_timeout_ms": 10000, 00:19:39.423 "arbitration_burst": 0, 00:19:39.423 "low_priority_weight": 0, 00:19:39.423 "medium_priority_weight": 0, 00:19:39.423 "high_priority_weight": 0, 00:19:39.423 "nvme_adminq_poll_period_us": 10000, 00:19:39.423 "nvme_ioq_poll_period_us": 0, 00:19:39.423 "io_queue_requests": 512, 00:19:39.423 "delay_cmd_submit": true, 00:19:39.423 "transport_retry_count": 4, 00:19:39.423 "bdev_retry_count": 3, 00:19:39.423 "transport_ack_timeout": 0, 00:19:39.423 "ctrlr_loss_timeout_sec": 0, 00:19:39.423 "reconnect_delay_sec": 0, 00:19:39.423 "fast_io_fail_timeout_sec": 0, 00:19:39.423 "disable_auto_failback": false, 00:19:39.423 "generate_uuids": false, 00:19:39.423 "transport_tos": 0, 00:19:39.423 "nvme_error_stat": false, 00:19:39.423 "rdma_srq_size": 0, 00:19:39.423 "io_path_stat": false, 00:19:39.423 "allow_accel_sequence": false, 00:19:39.423 "rdma_max_cq_size": 0, 00:19:39.423 "rdma_cm_event_timeout_ms": 0, 00:19:39.423 "dhchap_digests": [ 00:19:39.423 "sha256", 00:19:39.423 "sha384", 00:19:39.423 "sha512" 00:19:39.423 ], 00:19:39.423 "dhchap_dhgroups": [ 00:19:39.423 "null", 00:19:39.423 "ffdhe2048", 00:19:39.423 "ffdhe3072", 00:19:39.423 "ffdhe4096", 00:19:39.423 "ffdhe6144", 00:19:39.423 "ffdhe8192" 00:19:39.423 ] 00:19:39.423 } 00:19:39.423 }, 00:19:39.423 { 00:19:39.423 "method": "bdev_nvme_attach_controller", 00:19:39.423 "params": { 00:19:39.423 "name": "TLSTEST", 00:19:39.423 "trtype": "TCP", 00:19:39.423 "adrfam": "IPv4", 00:19:39.423 "traddr": "10.0.0.2", 00:19:39.423 "trsvcid": "4420", 00:19:39.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.423 "prchk_reftag": false, 00:19:39.423 "prchk_guard": false, 00:19:39.423 "ctrlr_loss_timeout_sec": 0, 00:19:39.423 "reconnect_delay_sec": 0, 00:19:39.423 "fast_io_fail_timeout_sec": 0, 00:19:39.424 "psk": 
"key0", 00:19:39.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.424 "hdgst": false, 00:19:39.424 "ddgst": false, 00:19:39.424 "multipath": "multipath" 00:19:39.424 } 00:19:39.424 }, 00:19:39.424 { 00:19:39.424 "method": "bdev_nvme_set_hotplug", 00:19:39.424 "params": { 00:19:39.424 "period_us": 100000, 00:19:39.424 "enable": false 00:19:39.424 } 00:19:39.424 }, 00:19:39.424 { 00:19:39.424 "method": "bdev_wait_for_examine" 00:19:39.424 } 00:19:39.424 ] 00:19:39.424 }, 00:19:39.424 { 00:19:39.424 "subsystem": "nbd", 00:19:39.424 "config": [] 00:19:39.424 } 00:19:39.424 ] 00:19:39.424 }' 00:19:39.424 [2024-10-17 16:47:52.822888] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:39.424 [2024-10-17 16:47:52.822969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377190 ] 00:19:39.424 [2024-10-17 16:47:52.882149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.424 [2024-10-17 16:47:52.945917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.682 [2024-10-17 16:47:53.127866] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.682 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.682 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.682 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:39.682 Running I/O for 10 seconds... 
00:19:41.988 3371.00 IOPS, 13.17 MiB/s [2024-10-17T14:47:56.612Z] 3439.50 IOPS, 13.44 MiB/s [2024-10-17T14:47:57.544Z] 3468.67 IOPS, 13.55 MiB/s [2024-10-17T14:47:58.476Z] 3478.00 IOPS, 13.59 MiB/s [2024-10-17T14:47:59.409Z] 3465.80 IOPS, 13.54 MiB/s [2024-10-17T14:48:00.784Z] 3473.83 IOPS, 13.57 MiB/s [2024-10-17T14:48:01.718Z] 3476.86 IOPS, 13.58 MiB/s [2024-10-17T14:48:02.650Z] 3470.75 IOPS, 13.56 MiB/s [2024-10-17T14:48:03.585Z] 3468.33 IOPS, 13.55 MiB/s [2024-10-17T14:48:03.585Z] 3470.80 IOPS, 13.56 MiB/s 00:19:49.895 Latency(us) 00:19:49.895 [2024-10-17T14:48:03.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:49.895 Verification LBA range: start 0x0 length 0x2000 00:19:49.895 TLSTESTn1 : 10.02 3477.26 13.58 0.00 0.00 36751.72 6213.78 53982.25 00:19:49.895 [2024-10-17T14:48:03.585Z] =================================================================================================================== 00:19:49.895 [2024-10-17T14:48:03.585Z] Total : 3477.26 13.58 0.00 0.00 36751.72 6213.78 53982.25 00:19:49.895 { 00:19:49.895 "results": [ 00:19:49.895 { 00:19:49.895 "job": "TLSTESTn1", 00:19:49.895 "core_mask": "0x4", 00:19:49.895 "workload": "verify", 00:19:49.895 "status": "finished", 00:19:49.895 "verify_range": { 00:19:49.895 "start": 0, 00:19:49.895 "length": 8192 00:19:49.895 }, 00:19:49.895 "queue_depth": 128, 00:19:49.895 "io_size": 4096, 00:19:49.895 "runtime": 10.017645, 00:19:49.895 "iops": 3477.2643670243856, 00:19:49.895 "mibps": 13.583063933689006, 00:19:49.895 "io_failed": 0, 00:19:49.895 "io_timeout": 0, 00:19:49.895 "avg_latency_us": 36751.72236167729, 00:19:49.895 "min_latency_us": 6213.783703703703, 00:19:49.895 "max_latency_us": 53982.24592592593 00:19:49.895 } 00:19:49.895 ], 00:19:49.895 "core_count": 1 00:19:49.895 } 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2377190 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2377190 ']' 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2377190 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2377190 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2377190' 00:19:49.895 killing process with pid 2377190 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2377190 00:19:49.895 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.895 00:19:49.895 Latency(us) 00:19:49.895 [2024-10-17T14:48:03.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.895 [2024-10-17T14:48:03.585Z] =================================================================================================================== 00:19:49.895 [2024-10-17T14:48:03.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.895 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2377190 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2377038 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 2377038 ']' 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2377038 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2377038 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2377038' 00:19:50.153 killing process with pid 2377038 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2377038 00:19:50.153 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2377038 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2378621 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2378621 00:19:50.412 
16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2378621 ']' 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.412 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.412 [2024-10-17 16:48:04.029548] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:50.412 [2024-10-17 16:48:04.029663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.412 [2024-10-17 16:48:04.098908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.671 [2024-10-17 16:48:04.157895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.671 [2024-10-17 16:48:04.157944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.671 [2024-10-17 16:48:04.157959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.671 [2024-10-17 16:48:04.157971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:50.671 [2024-10-17 16:48:04.158012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.671 [2024-10-17 16:48:04.158536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ks8EnVjUHZ 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ks8EnVjUHZ 00:19:50.671 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:50.929 [2024-10-17 16:48:04.557588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.929 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.188 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:51.446 [2024-10-17 16:48:05.127170] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:51.446 [2024-10-17 16:48:05.127464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.704 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:51.963 malloc0 00:19:51.963 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.221 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:52.480 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2378911 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2378911 /var/tmp/bdevperf.sock 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2378911 ']' 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.738 
16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.738 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.738 [2024-10-17 16:48:06.398435] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:52.738 [2024-10-17 16:48:06.398526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378911 ] 00:19:52.996 [2024-10-17 16:48:06.457219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.996 [2024-10-17 16:48:06.517917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.996 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.996 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.996 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:53.256 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:53.537 [2024-10-17 16:48:07.173318] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:53.806 nvme0n1 00:19:53.806 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.806 Running I/O for 1 seconds... 00:19:54.741 3210.00 IOPS, 12.54 MiB/s 00:19:54.741 Latency(us) 00:19:54.741 [2024-10-17T14:48:08.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.741 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.741 Verification LBA range: start 0x0 length 0x2000 00:19:54.741 nvme0n1 : 1.02 3268.48 12.77 0.00 0.00 38787.91 6941.96 53982.25 00:19:54.741 [2024-10-17T14:48:08.431Z] =================================================================================================================== 00:19:54.741 [2024-10-17T14:48:08.431Z] Total : 3268.48 12.77 0.00 0.00 38787.91 6941.96 53982.25 00:19:54.741 { 00:19:54.741 "results": [ 00:19:54.741 { 00:19:54.741 "job": "nvme0n1", 00:19:54.741 "core_mask": "0x2", 00:19:54.741 "workload": "verify", 00:19:54.741 "status": "finished", 00:19:54.741 "verify_range": { 00:19:54.741 "start": 0, 00:19:54.741 "length": 8192 00:19:54.741 }, 00:19:54.741 "queue_depth": 128, 00:19:54.741 "io_size": 4096, 00:19:54.741 "runtime": 1.02127, 00:19:54.741 "iops": 3268.479442263065, 00:19:54.741 "mibps": 12.767497821340097, 00:19:54.741 "io_failed": 0, 00:19:54.741 "io_timeout": 0, 00:19:54.741 "avg_latency_us": 38787.9083700597, 00:19:54.741 "min_latency_us": 6941.961481481481, 00:19:54.741 "max_latency_us": 53982.24592592593 00:19:54.741 } 00:19:54.741 ], 00:19:54.741 "core_count": 1 00:19:54.741 } 00:19:54.741 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2378911 00:19:54.741 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2378911 ']' 00:19:54.741 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 2378911 00:19:54.741 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:54.741 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.741 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2378911 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2378911' 00:19:55.000 killing process with pid 2378911 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2378911 00:19:55.000 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.000 00:19:55.000 Latency(us) 00:19:55.000 [2024-10-17T14:48:08.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.000 [2024-10-17T14:48:08.690Z] =================================================================================================================== 00:19:55.000 [2024-10-17T14:48:08.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2378911 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2378621 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2378621 ']' 00:19:55.000 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2378621 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2378621 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2378621' 00:19:55.259 killing process with pid 2378621 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2378621 00:19:55.259 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2378621 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2379614 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2379614 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2379614 ']' 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.518 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.518 [2024-10-17 16:48:09.036764] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:19:55.518 [2024-10-17 16:48:09.036862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.518 [2024-10-17 16:48:09.103660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.518 [2024-10-17 16:48:09.166866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.518 [2024-10-17 16:48:09.166916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.518 [2024-10-17 16:48:09.166942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.518 [2024-10-17 16:48:09.166955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.518 [2024-10-17 16:48:09.166966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
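Each nvmf_tgt instance in this log is driven through the same target-side TLS setup sequence via `rpc.py`: `nvmf_create_transport -t tcp -o`, `nvmf_create_subsystem`, `nvmf_subsystem_add_listener ... -k` (the `-k` flag enables the secure channel and triggers the "TLS support is considered experimental" notice), `keyring_file_add_key`, and `nvmf_subsystem_add_host --psk`. As a rough sketch (not part of the test scripts themselves), the JSON-RPC requests those `rpc.py` invocations send over `/var/tmp/spdk.sock` look approximately like the payloads below, using the NQNs, address, and temp key file from this run; treat the exact parameter names as illustrative, since they can vary by SPDK version:

```python
import json

# Approximate JSON-RPC payloads behind the rpc.py calls seen in this log.
# Method names are taken from the log; parameter sets are a best-effort
# reconstruction, not an authoritative SPDK reference.
setup_calls = [
    ("nvmf_create_transport", {"trtype": "TCP"}),
    ("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "serial_number": "SPDK00000000000001",
                               "max_namespaces": 10}),
    # '-k' on nvmf_subsystem_add_listener maps to requesting a TLS-capable
    # (secure channel) listener on 10.0.0.2:4420
    ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                     "listen_address": {"trtype": "tcp",
                                                        "traddr": "10.0.0.2",
                                                        "trsvcid": "4420"},
                                     "secure_channel": True}),
    # register the pre-shared key file generated by the test under the name
    # 'key0', then allow host1 to connect with that PSK
    ("keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.ks8EnVjUHZ"}),
    ("nvmf_subsystem_add_host", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                 "host": "nqn.2016-06.io.spdk:host1",
                                 "psk": "key0"}),
]

requests = [
    {"jsonrpc": "2.0", "id": i, "method": method, "params": params}
    for i, (method, params) in enumerate(setup_calls, start=1)
]

for req in requests:
    print(json.dumps(req))
```

On the wire, `rpc.py` simply writes each of these objects to the UNIX domain socket and waits for the matching response, which is why the same sequence of *NOTICE* lines repeats for every nvmf_tgt instance in this log.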
00:19:55.518 [2024-10-17 16:48:09.167621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.776 [2024-10-17 16:48:09.313863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.776 malloc0 00:19:55.776 [2024-10-17 16:48:09.344767] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.776 [2024-10-17 16:48:09.345056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2379836 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2379836 /var/tmp/bdevperf.sock 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2379836 ']' 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.776 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.776 [2024-10-17 16:48:09.416384] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:55.776 [2024-10-17 16:48:09.416482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379836 ] 00:19:56.035 [2024-10-17 16:48:09.479180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.035 [2024-10-17 16:48:09.538404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.035 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.035 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.035 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ks8EnVjUHZ 00:19:56.293 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:56.551 [2024-10-17 16:48:10.179313] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.809 nvme0n1 00:19:56.809 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.809 Running I/O for 1 seconds... 
00:19:57.745 3442.00 IOPS, 13.45 MiB/s 00:19:57.745 Latency(us) 00:19:57.745 [2024-10-17T14:48:11.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.745 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:57.745 Verification LBA range: start 0x0 length 0x2000 00:19:57.745 nvme0n1 : 1.02 3501.35 13.68 0.00 0.00 36231.14 6553.60 36505.98 00:19:57.745 [2024-10-17T14:48:11.435Z] =================================================================================================================== 00:19:57.745 [2024-10-17T14:48:11.435Z] Total : 3501.35 13.68 0.00 0.00 36231.14 6553.60 36505.98 00:19:57.745 { 00:19:57.745 "results": [ 00:19:57.745 { 00:19:57.745 "job": "nvme0n1", 00:19:57.745 "core_mask": "0x2", 00:19:57.745 "workload": "verify", 00:19:57.745 "status": "finished", 00:19:57.745 "verify_range": { 00:19:57.745 "start": 0, 00:19:57.745 "length": 8192 00:19:57.745 }, 00:19:57.745 "queue_depth": 128, 00:19:57.745 "io_size": 4096, 00:19:57.745 "runtime": 1.019893, 00:19:57.745 "iops": 3501.3476903949727, 00:19:57.745 "mibps": 13.677139415605362, 00:19:57.745 "io_failed": 0, 00:19:57.745 "io_timeout": 0, 00:19:57.745 "avg_latency_us": 36231.14363649564, 00:19:57.745 "min_latency_us": 6553.6, 00:19:57.745 "max_latency_us": 36505.97925925926 00:19:57.745 } 00:19:57.745 ], 00:19:57.745 "core_count": 1 00:19:57.745 } 00:19:57.745 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:57.745 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.745 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.004 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.004 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:58.004 "subsystems": [ 00:19:58.004 { 00:19:58.004 "subsystem": "keyring", 
00:19:58.004 "config": [ 00:19:58.004 { 00:19:58.004 "method": "keyring_file_add_key", 00:19:58.004 "params": { 00:19:58.004 "name": "key0", 00:19:58.004 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:58.004 } 00:19:58.004 } 00:19:58.004 ] 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "subsystem": "iobuf", 00:19:58.004 "config": [ 00:19:58.004 { 00:19:58.004 "method": "iobuf_set_options", 00:19:58.004 "params": { 00:19:58.004 "small_pool_count": 8192, 00:19:58.004 "large_pool_count": 1024, 00:19:58.004 "small_bufsize": 8192, 00:19:58.004 "large_bufsize": 135168 00:19:58.004 } 00:19:58.004 } 00:19:58.004 ] 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "subsystem": "sock", 00:19:58.004 "config": [ 00:19:58.004 { 00:19:58.004 "method": "sock_set_default_impl", 00:19:58.004 "params": { 00:19:58.004 "impl_name": "posix" 00:19:58.004 } 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "method": "sock_impl_set_options", 00:19:58.004 "params": { 00:19:58.004 "impl_name": "ssl", 00:19:58.004 "recv_buf_size": 4096, 00:19:58.004 "send_buf_size": 4096, 00:19:58.004 "enable_recv_pipe": true, 00:19:58.004 "enable_quickack": false, 00:19:58.004 "enable_placement_id": 0, 00:19:58.004 "enable_zerocopy_send_server": true, 00:19:58.004 "enable_zerocopy_send_client": false, 00:19:58.004 "zerocopy_threshold": 0, 00:19:58.004 "tls_version": 0, 00:19:58.004 "enable_ktls": false 00:19:58.004 } 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "method": "sock_impl_set_options", 00:19:58.004 "params": { 00:19:58.004 "impl_name": "posix", 00:19:58.004 "recv_buf_size": 2097152, 00:19:58.004 "send_buf_size": 2097152, 00:19:58.004 "enable_recv_pipe": true, 00:19:58.004 "enable_quickack": false, 00:19:58.004 "enable_placement_id": 0, 00:19:58.004 "enable_zerocopy_send_server": true, 00:19:58.004 "enable_zerocopy_send_client": false, 00:19:58.004 "zerocopy_threshold": 0, 00:19:58.004 "tls_version": 0, 00:19:58.004 "enable_ktls": false 00:19:58.004 } 00:19:58.004 } 00:19:58.004 ] 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 
"subsystem": "vmd", 00:19:58.004 "config": [] 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "subsystem": "accel", 00:19:58.004 "config": [ 00:19:58.004 { 00:19:58.004 "method": "accel_set_options", 00:19:58.004 "params": { 00:19:58.004 "small_cache_size": 128, 00:19:58.004 "large_cache_size": 16, 00:19:58.004 "task_count": 2048, 00:19:58.004 "sequence_count": 2048, 00:19:58.004 "buf_count": 2048 00:19:58.004 } 00:19:58.004 } 00:19:58.004 ] 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "subsystem": "bdev", 00:19:58.004 "config": [ 00:19:58.004 { 00:19:58.004 "method": "bdev_set_options", 00:19:58.004 "params": { 00:19:58.004 "bdev_io_pool_size": 65535, 00:19:58.004 "bdev_io_cache_size": 256, 00:19:58.004 "bdev_auto_examine": true, 00:19:58.004 "iobuf_small_cache_size": 128, 00:19:58.004 "iobuf_large_cache_size": 16 00:19:58.004 } 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "method": "bdev_raid_set_options", 00:19:58.004 "params": { 00:19:58.004 "process_window_size_kb": 1024, 00:19:58.004 "process_max_bandwidth_mb_sec": 0 00:19:58.004 } 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "method": "bdev_iscsi_set_options", 00:19:58.004 "params": { 00:19:58.004 "timeout_sec": 30 00:19:58.004 } 00:19:58.004 }, 00:19:58.004 { 00:19:58.004 "method": "bdev_nvme_set_options", 00:19:58.005 "params": { 00:19:58.005 "action_on_timeout": "none", 00:19:58.005 "timeout_us": 0, 00:19:58.005 "timeout_admin_us": 0, 00:19:58.005 "keep_alive_timeout_ms": 10000, 00:19:58.005 "arbitration_burst": 0, 00:19:58.005 "low_priority_weight": 0, 00:19:58.005 "medium_priority_weight": 0, 00:19:58.005 "high_priority_weight": 0, 00:19:58.005 "nvme_adminq_poll_period_us": 10000, 00:19:58.005 "nvme_ioq_poll_period_us": 0, 00:19:58.005 "io_queue_requests": 0, 00:19:58.005 "delay_cmd_submit": true, 00:19:58.005 "transport_retry_count": 4, 00:19:58.005 "bdev_retry_count": 3, 00:19:58.005 "transport_ack_timeout": 0, 00:19:58.005 "ctrlr_loss_timeout_sec": 0, 00:19:58.005 "reconnect_delay_sec": 0, 00:19:58.005 
"fast_io_fail_timeout_sec": 0, 00:19:58.005 "disable_auto_failback": false, 00:19:58.005 "generate_uuids": false, 00:19:58.005 "transport_tos": 0, 00:19:58.005 "nvme_error_stat": false, 00:19:58.005 "rdma_srq_size": 0, 00:19:58.005 "io_path_stat": false, 00:19:58.005 "allow_accel_sequence": false, 00:19:58.005 "rdma_max_cq_size": 0, 00:19:58.005 "rdma_cm_event_timeout_ms": 0, 00:19:58.005 "dhchap_digests": [ 00:19:58.005 "sha256", 00:19:58.005 "sha384", 00:19:58.005 "sha512" 00:19:58.005 ], 00:19:58.005 "dhchap_dhgroups": [ 00:19:58.005 "null", 00:19:58.005 "ffdhe2048", 00:19:58.005 "ffdhe3072", 00:19:58.005 "ffdhe4096", 00:19:58.005 "ffdhe6144", 00:19:58.005 "ffdhe8192" 00:19:58.005 ] 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "bdev_nvme_set_hotplug", 00:19:58.005 "params": { 00:19:58.005 "period_us": 100000, 00:19:58.005 "enable": false 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "bdev_malloc_create", 00:19:58.005 "params": { 00:19:58.005 "name": "malloc0", 00:19:58.005 "num_blocks": 8192, 00:19:58.005 "block_size": 4096, 00:19:58.005 "physical_block_size": 4096, 00:19:58.005 "uuid": "ded9f30d-b699-467c-b58a-146387ea2468", 00:19:58.005 "optimal_io_boundary": 0, 00:19:58.005 "md_size": 0, 00:19:58.005 "dif_type": 0, 00:19:58.005 "dif_is_head_of_md": false, 00:19:58.005 "dif_pi_format": 0 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "bdev_wait_for_examine" 00:19:58.005 } 00:19:58.005 ] 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "subsystem": "nbd", 00:19:58.005 "config": [] 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "subsystem": "scheduler", 00:19:58.005 "config": [ 00:19:58.005 { 00:19:58.005 "method": "framework_set_scheduler", 00:19:58.005 "params": { 00:19:58.005 "name": "static" 00:19:58.005 } 00:19:58.005 } 00:19:58.005 ] 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "subsystem": "nvmf", 00:19:58.005 "config": [ 00:19:58.005 { 00:19:58.005 "method": "nvmf_set_config", 00:19:58.005 
"params": { 00:19:58.005 "discovery_filter": "match_any", 00:19:58.005 "admin_cmd_passthru": { 00:19:58.005 "identify_ctrlr": false 00:19:58.005 }, 00:19:58.005 "dhchap_digests": [ 00:19:58.005 "sha256", 00:19:58.005 "sha384", 00:19:58.005 "sha512" 00:19:58.005 ], 00:19:58.005 "dhchap_dhgroups": [ 00:19:58.005 "null", 00:19:58.005 "ffdhe2048", 00:19:58.005 "ffdhe3072", 00:19:58.005 "ffdhe4096", 00:19:58.005 "ffdhe6144", 00:19:58.005 "ffdhe8192" 00:19:58.005 ] 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "nvmf_set_max_subsystems", 00:19:58.005 "params": { 00:19:58.005 "max_subsystems": 1024 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "nvmf_set_crdt", 00:19:58.005 "params": { 00:19:58.005 "crdt1": 0, 00:19:58.005 "crdt2": 0, 00:19:58.005 "crdt3": 0 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "nvmf_create_transport", 00:19:58.005 "params": { 00:19:58.005 "trtype": "TCP", 00:19:58.005 "max_queue_depth": 128, 00:19:58.005 "max_io_qpairs_per_ctrlr": 127, 00:19:58.005 "in_capsule_data_size": 4096, 00:19:58.005 "max_io_size": 131072, 00:19:58.005 "io_unit_size": 131072, 00:19:58.005 "max_aq_depth": 128, 00:19:58.005 "num_shared_buffers": 511, 00:19:58.005 "buf_cache_size": 4294967295, 00:19:58.005 "dif_insert_or_strip": false, 00:19:58.005 "zcopy": false, 00:19:58.005 "c2h_success": false, 00:19:58.005 "sock_priority": 0, 00:19:58.005 "abort_timeout_sec": 1, 00:19:58.005 "ack_timeout": 0, 00:19:58.005 "data_wr_pool_size": 0 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "nvmf_create_subsystem", 00:19:58.005 "params": { 00:19:58.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.005 "allow_any_host": false, 00:19:58.005 "serial_number": "00000000000000000000", 00:19:58.005 "model_number": "SPDK bdev Controller", 00:19:58.005 "max_namespaces": 32, 00:19:58.005 "min_cntlid": 1, 00:19:58.005 "max_cntlid": 65519, 00:19:58.005 "ana_reporting": false 00:19:58.005 } 00:19:58.005 }, 
00:19:58.005 { 00:19:58.005 "method": "nvmf_subsystem_add_host", 00:19:58.005 "params": { 00:19:58.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.005 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.005 "psk": "key0" 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "nvmf_subsystem_add_ns", 00:19:58.005 "params": { 00:19:58.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.005 "namespace": { 00:19:58.005 "nsid": 1, 00:19:58.005 "bdev_name": "malloc0", 00:19:58.005 "nguid": "DED9F30DB699467CB58A146387EA2468", 00:19:58.005 "uuid": "ded9f30d-b699-467c-b58a-146387ea2468", 00:19:58.005 "no_auto_visible": false 00:19:58.005 } 00:19:58.005 } 00:19:58.005 }, 00:19:58.005 { 00:19:58.005 "method": "nvmf_subsystem_add_listener", 00:19:58.005 "params": { 00:19:58.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.005 "listen_address": { 00:19:58.005 "trtype": "TCP", 00:19:58.005 "adrfam": "IPv4", 00:19:58.005 "traddr": "10.0.0.2", 00:19:58.005 "trsvcid": "4420" 00:19:58.005 }, 00:19:58.005 "secure_channel": false, 00:19:58.005 "sock_impl": "ssl" 00:19:58.005 } 00:19:58.005 } 00:19:58.005 ] 00:19:58.005 } 00:19:58.005 ] 00:19:58.005 }' 00:19:58.005 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:58.264 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:58.264 "subsystems": [ 00:19:58.264 { 00:19:58.264 "subsystem": "keyring", 00:19:58.264 "config": [ 00:19:58.264 { 00:19:58.264 "method": "keyring_file_add_key", 00:19:58.264 "params": { 00:19:58.264 "name": "key0", 00:19:58.264 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:58.264 } 00:19:58.264 } 00:19:58.264 ] 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "subsystem": "iobuf", 00:19:58.264 "config": [ 00:19:58.264 { 00:19:58.264 "method": "iobuf_set_options", 00:19:58.264 "params": { 00:19:58.264 "small_pool_count": 8192, 00:19:58.264 "large_pool_count": 
1024, 00:19:58.264 "small_bufsize": 8192, 00:19:58.264 "large_bufsize": 135168 00:19:58.264 } 00:19:58.264 } 00:19:58.264 ] 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "subsystem": "sock", 00:19:58.264 "config": [ 00:19:58.264 { 00:19:58.264 "method": "sock_set_default_impl", 00:19:58.264 "params": { 00:19:58.264 "impl_name": "posix" 00:19:58.264 } 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "method": "sock_impl_set_options", 00:19:58.264 "params": { 00:19:58.264 "impl_name": "ssl", 00:19:58.264 "recv_buf_size": 4096, 00:19:58.264 "send_buf_size": 4096, 00:19:58.264 "enable_recv_pipe": true, 00:19:58.264 "enable_quickack": false, 00:19:58.264 "enable_placement_id": 0, 00:19:58.264 "enable_zerocopy_send_server": true, 00:19:58.264 "enable_zerocopy_send_client": false, 00:19:58.264 "zerocopy_threshold": 0, 00:19:58.264 "tls_version": 0, 00:19:58.264 "enable_ktls": false 00:19:58.264 } 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "method": "sock_impl_set_options", 00:19:58.264 "params": { 00:19:58.264 "impl_name": "posix", 00:19:58.264 "recv_buf_size": 2097152, 00:19:58.264 "send_buf_size": 2097152, 00:19:58.264 "enable_recv_pipe": true, 00:19:58.264 "enable_quickack": false, 00:19:58.264 "enable_placement_id": 0, 00:19:58.264 "enable_zerocopy_send_server": true, 00:19:58.264 "enable_zerocopy_send_client": false, 00:19:58.264 "zerocopy_threshold": 0, 00:19:58.264 "tls_version": 0, 00:19:58.264 "enable_ktls": false 00:19:58.264 } 00:19:58.264 } 00:19:58.264 ] 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "subsystem": "vmd", 00:19:58.264 "config": [] 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "subsystem": "accel", 00:19:58.264 "config": [ 00:19:58.264 { 00:19:58.264 "method": "accel_set_options", 00:19:58.264 "params": { 00:19:58.264 "small_cache_size": 128, 00:19:58.264 "large_cache_size": 16, 00:19:58.264 "task_count": 2048, 00:19:58.264 "sequence_count": 2048, 00:19:58.264 "buf_count": 2048 00:19:58.264 } 00:19:58.264 } 00:19:58.264 ] 00:19:58.264 }, 00:19:58.264 { 
00:19:58.264 "subsystem": "bdev", 00:19:58.264 "config": [ 00:19:58.264 { 00:19:58.264 "method": "bdev_set_options", 00:19:58.264 "params": { 00:19:58.264 "bdev_io_pool_size": 65535, 00:19:58.264 "bdev_io_cache_size": 256, 00:19:58.264 "bdev_auto_examine": true, 00:19:58.264 "iobuf_small_cache_size": 128, 00:19:58.264 "iobuf_large_cache_size": 16 00:19:58.264 } 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "method": "bdev_raid_set_options", 00:19:58.264 "params": { 00:19:58.264 "process_window_size_kb": 1024, 00:19:58.264 "process_max_bandwidth_mb_sec": 0 00:19:58.264 } 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "method": "bdev_iscsi_set_options", 00:19:58.264 "params": { 00:19:58.264 "timeout_sec": 30 00:19:58.264 } 00:19:58.264 }, 00:19:58.264 { 00:19:58.264 "method": "bdev_nvme_set_options", 00:19:58.264 "params": { 00:19:58.264 "action_on_timeout": "none", 00:19:58.264 "timeout_us": 0, 00:19:58.264 "timeout_admin_us": 0, 00:19:58.264 "keep_alive_timeout_ms": 10000, 00:19:58.264 "arbitration_burst": 0, 00:19:58.264 "low_priority_weight": 0, 00:19:58.264 "medium_priority_weight": 0, 00:19:58.264 "high_priority_weight": 0, 00:19:58.264 "nvme_adminq_poll_period_us": 10000, 00:19:58.264 "nvme_ioq_poll_period_us": 0, 00:19:58.264 "io_queue_requests": 512, 00:19:58.264 "delay_cmd_submit": true, 00:19:58.264 "transport_retry_count": 4, 00:19:58.264 "bdev_retry_count": 3, 00:19:58.264 "transport_ack_timeout": 0, 00:19:58.264 "ctrlr_loss_timeout_sec": 0, 00:19:58.264 "reconnect_delay_sec": 0, 00:19:58.264 "fast_io_fail_timeout_sec": 0, 00:19:58.264 "disable_auto_failback": false, 00:19:58.264 "generate_uuids": false, 00:19:58.264 "transport_tos": 0, 00:19:58.264 "nvme_error_stat": false, 00:19:58.264 "rdma_srq_size": 0, 00:19:58.264 "io_path_stat": false, 00:19:58.264 "allow_accel_sequence": false, 00:19:58.264 "rdma_max_cq_size": 0, 00:19:58.264 "rdma_cm_event_timeout_ms": 0, 00:19:58.264 "dhchap_digests": [ 00:19:58.264 "sha256", 00:19:58.264 "sha384", 00:19:58.264 
"sha512" 00:19:58.264 ], 00:19:58.264 "dhchap_dhgroups": [ 00:19:58.264 "null", 00:19:58.264 "ffdhe2048", 00:19:58.264 "ffdhe3072", 00:19:58.264 "ffdhe4096", 00:19:58.264 "ffdhe6144", 00:19:58.264 "ffdhe8192" 00:19:58.264 ] 00:19:58.264 } 00:19:58.265 }, 00:19:58.265 { 00:19:58.265 "method": "bdev_nvme_attach_controller", 00:19:58.265 "params": { 00:19:58.265 "name": "nvme0", 00:19:58.265 "trtype": "TCP", 00:19:58.265 "adrfam": "IPv4", 00:19:58.265 "traddr": "10.0.0.2", 00:19:58.265 "trsvcid": "4420", 00:19:58.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.265 "prchk_reftag": false, 00:19:58.265 "prchk_guard": false, 00:19:58.265 "ctrlr_loss_timeout_sec": 0, 00:19:58.265 "reconnect_delay_sec": 0, 00:19:58.265 "fast_io_fail_timeout_sec": 0, 00:19:58.265 "psk": "key0", 00:19:58.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.265 "hdgst": false, 00:19:58.265 "ddgst": false, 00:19:58.265 "multipath": "multipath" 00:19:58.265 } 00:19:58.265 }, 00:19:58.265 { 00:19:58.265 "method": "bdev_nvme_set_hotplug", 00:19:58.265 "params": { 00:19:58.265 "period_us": 100000, 00:19:58.265 "enable": false 00:19:58.265 } 00:19:58.265 }, 00:19:58.265 { 00:19:58.265 "method": "bdev_enable_histogram", 00:19:58.265 "params": { 00:19:58.265 "name": "nvme0n1", 00:19:58.265 "enable": true 00:19:58.265 } 00:19:58.265 }, 00:19:58.265 { 00:19:58.265 "method": "bdev_wait_for_examine" 00:19:58.265 } 00:19:58.265 ] 00:19:58.265 }, 00:19:58.265 { 00:19:58.265 "subsystem": "nbd", 00:19:58.265 "config": [] 00:19:58.265 } 00:19:58.265 ] 00:19:58.265 }' 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2379836 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2379836 ']' 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2379836 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.265 
16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379836 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379836' 00:19:58.265 killing process with pid 2379836 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2379836 00:19:58.265 Received shutdown signal, test time was about 1.000000 seconds 00:19:58.265 00:19:58.265 Latency(us) 00:19:58.265 [2024-10-17T14:48:11.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.265 [2024-10-17T14:48:11.955Z] =================================================================================================================== 00:19:58.265 [2024-10-17T14:48:11.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.265 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2379836 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2379614 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2379614 ']' 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2379614 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 2379614 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379614' 00:19:58.524 killing process with pid 2379614 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2379614 00:19:58.524 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2379614 00:19:58.783 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:58.783 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:58.783 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:58.783 "subsystems": [ 00:19:58.783 { 00:19:58.783 "subsystem": "keyring", 00:19:58.783 "config": [ 00:19:58.783 { 00:19:58.783 "method": "keyring_file_add_key", 00:19:58.783 "params": { 00:19:58.783 "name": "key0", 00:19:58.783 "path": "/tmp/tmp.ks8EnVjUHZ" 00:19:58.783 } 00:19:58.783 } 00:19:58.783 ] 00:19:58.783 }, 00:19:58.783 { 00:19:58.783 "subsystem": "iobuf", 00:19:58.783 "config": [ 00:19:58.783 { 00:19:58.783 "method": "iobuf_set_options", 00:19:58.783 "params": { 00:19:58.783 "small_pool_count": 8192, 00:19:58.783 "large_pool_count": 1024, 00:19:58.783 "small_bufsize": 8192, 00:19:58.783 "large_bufsize": 135168 00:19:58.783 } 00:19:58.783 } 00:19:58.783 ] 00:19:58.783 }, 00:19:58.783 { 00:19:58.783 "subsystem": "sock", 00:19:58.783 "config": [ 00:19:58.783 { 00:19:58.783 "method": "sock_set_default_impl", 00:19:58.783 "params": { 00:19:58.783 "impl_name": "posix" 00:19:58.783 } 00:19:58.783 }, 00:19:58.783 { 00:19:58.783 "method": "sock_impl_set_options", 00:19:58.783 "params": { 
00:19:58.783 "impl_name": "ssl", 00:19:58.783 "recv_buf_size": 4096, 00:19:58.783 "send_buf_size": 4096, 00:19:58.783 "enable_recv_pipe": true, 00:19:58.783 "enable_quickack": false, 00:19:58.783 "enable_placement_id": 0, 00:19:58.783 "enable_zerocopy_send_server": true, 00:19:58.783 "enable_zerocopy_send_client": false, 00:19:58.783 "zerocopy_threshold": 0, 00:19:58.783 "tls_version": 0, 00:19:58.783 "enable_ktls": false 00:19:58.783 } 00:19:58.783 }, 00:19:58.783 { 00:19:58.783 "method": "sock_impl_set_options", 00:19:58.783 "params": { 00:19:58.783 "impl_name": "posix", 00:19:58.783 "recv_buf_size": 2097152, 00:19:58.783 "send_buf_size": 2097152, 00:19:58.783 "enable_recv_pipe": true, 00:19:58.783 "enable_quickack": false, 00:19:58.783 "enable_placement_id": 0, 00:19:58.783 "enable_zerocopy_send_server": true, 00:19:58.783 "enable_zerocopy_send_client": false, 00:19:58.783 "zerocopy_threshold": 0, 00:19:58.783 "tls_version": 0, 00:19:58.783 "enable_ktls": false 00:19:58.783 } 00:19:58.783 } 00:19:58.783 ] 00:19:58.783 }, 00:19:58.783 { 00:19:58.783 "subsystem": "vmd", 00:19:58.783 "config": [] 00:19:58.783 }, 00:19:58.783 { 00:19:58.783 "subsystem": "accel", 00:19:58.783 "config": [ 00:19:58.783 { 00:19:58.783 "method": "accel_set_options", 00:19:58.784 "params": { 00:19:58.784 "small_cache_size": 128, 00:19:58.784 "large_cache_size": 16, 00:19:58.784 "task_count": 2048, 00:19:58.784 "sequence_count": 2048, 00:19:58.784 "buf_count": 2048 00:19:58.784 } 00:19:58.784 } 00:19:58.784 ] 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "subsystem": "bdev", 00:19:58.784 "config": [ 00:19:58.784 { 00:19:58.784 "method": "bdev_set_options", 00:19:58.784 "params": { 00:19:58.784 "bdev_io_pool_size": 65535, 00:19:58.784 "bdev_io_cache_size": 256, 00:19:58.784 "bdev_auto_examine": true, 00:19:58.784 "iobuf_small_cache_size": 128, 00:19:58.784 "iobuf_large_cache_size": 16 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "bdev_raid_set_options", 00:19:58.784 
"params": { 00:19:58.784 "process_window_size_kb": 1024, 00:19:58.784 "process_max_bandwidth_mb_sec": 0 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "bdev_iscsi_set_options", 00:19:58.784 "params": { 00:19:58.784 "timeout_sec": 30 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "bdev_nvme_set_options", 00:19:58.784 "params": { 00:19:58.784 "action_on_timeout": "none", 00:19:58.784 "timeout_us": 0, 00:19:58.784 "timeout_admin_us": 0, 00:19:58.784 "keep_alive_timeout_ms": 10000, 00:19:58.784 "arbitration_burst": 0, 00:19:58.784 "low_priority_weight": 0, 00:19:58.784 "medium_priority_weight": 0, 00:19:58.784 "high_priority_weight": 0, 00:19:58.784 "nvme_adminq_poll_period_us": 10000, 00:19:58.784 "nvme_ioq_poll_period_us": 0, 00:19:58.784 "io_queue_requests": 0, 00:19:58.784 "delay_cmd_submit": true, 00:19:58.784 "transport_retry_count": 4, 00:19:58.784 "bdev_retry_count": 3, 00:19:58.784 "transport_ack_timeout": 0, 00:19:58.784 "ctrlr_loss_timeout_sec": 0, 00:19:58.784 "reconnect_delay_sec": 0, 00:19:58.784 "fast_io_fail_timeout_sec": 0, 00:19:58.784 "disable_auto_failback": false, 00:19:58.784 "generate_uuids": false, 00:19:58.784 "transport_tos": 0, 00:19:58.784 "nvme_error_stat": false, 00:19:58.784 "rdma_srq_size": 0, 00:19:58.784 "io_path_stat": false, 00:19:58.784 "allow_accel_sequence": false, 00:19:58.784 "rdma_max_cq_size": 0, 00:19:58.784 "rdma_cm_event_timeout_ms": 0, 00:19:58.784 "dhchap_digests": [ 00:19:58.784 "sha256", 00:19:58.784 "sha384", 00:19:58.784 "sha512" 00:19:58.784 ], 00:19:58.784 "dhchap_dhgroups": [ 00:19:58.784 "null", 00:19:58.784 "ffdhe2048", 00:19:58.784 "ffdhe3072", 00:19:58.784 "ffdhe4096", 00:19:58.784 "ffdhe6144", 00:19:58.784 "ffdhe8192" 00:19:58.784 ] 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "bdev_nvme_set_hotplug", 00:19:58.784 "params": { 00:19:58.784 "period_us": 100000, 00:19:58.784 "enable": false 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 
00:19:58.784 "method": "bdev_malloc_create", 00:19:58.784 "params": { 00:19:58.784 "name": "malloc0", 00:19:58.784 "num_blocks": 8192, 00:19:58.784 "block_size": 4096, 00:19:58.784 "physical_block_size": 4096, 00:19:58.784 "uuid": "ded9f30d-b699-467c-b58a-146387ea2468", 00:19:58.784 "optimal_io_boundary": 0, 00:19:58.784 "md_size": 0, 00:19:58.784 "dif_type": 0, 00:19:58.784 "dif_is_head_of_md": false, 00:19:58.784 "dif_pi_format": 0 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "bdev_wait_for_examine" 00:19:58.784 } 00:19:58.784 ] 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "subsystem": "nbd", 00:19:58.784 "config": [] 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "subsystem": "scheduler", 00:19:58.784 "config": [ 00:19:58.784 { 00:19:58.784 "method": "framework_set_scheduler", 00:19:58.784 "params": { 00:19:58.784 "name": "static" 00:19:58.784 } 00:19:58.784 } 00:19:58.784 ] 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "subsystem": "nvmf", 00:19:58.784 "config": [ 00:19:58.784 { 00:19:58.784 "method": "nvmf_set_config", 00:19:58.784 "params": { 00:19:58.784 "discovery_filter": "match_any", 00:19:58.784 "admin_cmd_passthru": { 00:19:58.784 "identify_ctrlr": false 00:19:58.784 }, 00:19:58.784 "dhchap_digests": [ 00:19:58.784 "sha256", 00:19:58.784 "sha384", 00:19:58.784 "sha512" 00:19:58.784 ], 00:19:58.784 "dhchap_dhgroups": [ 00:19:58.784 "null", 00:19:58.784 "ffdhe2048", 00:19:58.784 "ffdhe3072", 00:19:58.784 "ffdhe4096", 00:19:58.784 "ffdhe6144", 00:19:58.784 "ffdhe8192" 00:19:58.784 ] 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_set_max_subsystems", 00:19:58.784 "params": { 00:19:58.784 "max_subsystems": 1024 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_set_crdt", 00:19:58.784 "params": { 00:19:58.784 "crdt1": 0, 00:19:58.784 "crdt2": 0, 00:19:58.784 "crdt3": 0 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_create_transport", 00:19:58.784 "params": { 
00:19:58.784 "trtype": "TCP", 00:19:58.784 "max_queue_depth": 128, 00:19:58.784 "max_io_qpairs_per_ctrlr": 127, 00:19:58.784 "in_capsule_data_size": 4096, 00:19:58.784 "max_io_size": 131072, 00:19:58.784 "io_unit_size": 131072, 00:19:58.784 "max_aq_depth": 128, 00:19:58.784 "num_shared_buffers": 511, 00:19:58.784 "buf_cache_size": 4294967295, 00:19:58.784 "dif_insert_or_strip": false, 00:19:58.784 "zcopy": false, 00:19:58.784 "c2h_success": false, 00:19:58.784 "sock_priority": 0, 00:19:58.784 "abort_timeout_sec": 1, 00:19:58.784 "ack_timeout": 0, 00:19:58.784 "data_wr_pool_size": 0 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_create_subsystem", 00:19:58.784 "params": { 00:19:58.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.784 "allow_any_host": false, 00:19:58.784 "serial_number": "00000000000000000000", 00:19:58.784 "model_number": "SPDK bdev Controller", 00:19:58.784 "max_namespaces": 32, 00:19:58.784 "min_cntlid": 1, 00:19:58.784 "max_cntlid": 65519, 00:19:58.784 "ana_reporting": false 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_subsystem_add_host", 00:19:58.784 "params": { 00:19:58.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.784 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.784 "psk": "key0" 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_subsystem_add_ns", 00:19:58.784 "params": { 00:19:58.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.784 "namespace": { 00:19:58.784 "nsid": 1, 00:19:58.784 "bdev_name": "malloc0", 00:19:58.784 "nguid": "DED9F30DB699467CB58A146387EA2468", 00:19:58.784 "uuid": "ded9f30d-b699-467c-b58a-146387ea2468", 00:19:58.784 "no_auto_visible": false 00:19:58.784 } 00:19:58.784 } 00:19:58.784 }, 00:19:58.784 { 00:19:58.784 "method": "nvmf_subsystem_add_listener", 00:19:58.784 "params": { 00:19:58.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.784 "listen_address": { 00:19:58.784 "trtype": "TCP", 00:19:58.784 "adrfam": "IPv4", 00:19:58.784 
"traddr": "10.0.0.2", 00:19:58.784 "trsvcid": "4420" 00:19:58.784 }, 00:19:58.784 "secure_channel": false, 00:19:58.784 "sock_impl": "ssl" 00:19:58.784 } 00:19:58.784 } 00:19:58.784 ] 00:19:58.784 } 00:19:58.784 ] 00:19:58.784 }' 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2380136 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2380136 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2380136 ']' 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.784 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.044 [2024-10-17 16:48:12.495767] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:19:59.044 [2024-10-17 16:48:12.495864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.044 [2024-10-17 16:48:12.557906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.044 [2024-10-17 16:48:12.614402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.044 [2024-10-17 16:48:12.614456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.044 [2024-10-17 16:48:12.614479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.044 [2024-10-17 16:48:12.614497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.044 [2024-10-17 16:48:12.614507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.044 [2024-10-17 16:48:12.615155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.302 [2024-10-17 16:48:12.851708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.302 [2024-10-17 16:48:12.883736] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.302 [2024-10-17 16:48:12.883968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.867 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.867 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:59.867 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:59.867 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.867 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2380284 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2380284 /var/tmp/bdevperf.sock 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2380284 ']' 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.126 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:00.126 "subsystems": [ 00:20:00.126 { 00:20:00.126 "subsystem": "keyring", 00:20:00.126 "config": [ 00:20:00.126 { 00:20:00.126 "method": "keyring_file_add_key", 00:20:00.126 "params": { 00:20:00.126 "name": "key0", 00:20:00.126 "path": "/tmp/tmp.ks8EnVjUHZ" 00:20:00.126 } 00:20:00.126 } 00:20:00.126 ] 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "subsystem": "iobuf", 00:20:00.126 "config": [ 00:20:00.126 { 00:20:00.126 "method": "iobuf_set_options", 00:20:00.126 "params": { 00:20:00.126 "small_pool_count": 8192, 00:20:00.126 "large_pool_count": 1024, 00:20:00.126 "small_bufsize": 8192, 00:20:00.126 "large_bufsize": 135168 00:20:00.126 } 00:20:00.126 } 00:20:00.126 ] 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "subsystem": "sock", 00:20:00.126 "config": [ 00:20:00.126 { 00:20:00.126 "method": "sock_set_default_impl", 00:20:00.126 "params": { 00:20:00.126 "impl_name": "posix" 00:20:00.126 } 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "method": "sock_impl_set_options", 00:20:00.126 "params": { 00:20:00.126 "impl_name": "ssl", 00:20:00.126 "recv_buf_size": 4096, 00:20:00.126 "send_buf_size": 4096, 00:20:00.126 "enable_recv_pipe": true, 00:20:00.126 "enable_quickack": false, 00:20:00.126 "enable_placement_id": 0, 00:20:00.126 "enable_zerocopy_send_server": true, 00:20:00.126 "enable_zerocopy_send_client": false, 00:20:00.126 "zerocopy_threshold": 0, 00:20:00.126 "tls_version": 0, 00:20:00.126 "enable_ktls": false 00:20:00.126 } 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "method": "sock_impl_set_options", 00:20:00.126 "params": { 00:20:00.126 "impl_name": "posix", 
00:20:00.126 "recv_buf_size": 2097152, 00:20:00.126 "send_buf_size": 2097152, 00:20:00.126 "enable_recv_pipe": true, 00:20:00.126 "enable_quickack": false, 00:20:00.126 "enable_placement_id": 0, 00:20:00.126 "enable_zerocopy_send_server": true, 00:20:00.126 "enable_zerocopy_send_client": false, 00:20:00.126 "zerocopy_threshold": 0, 00:20:00.126 "tls_version": 0, 00:20:00.126 "enable_ktls": false 00:20:00.126 } 00:20:00.126 } 00:20:00.126 ] 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "subsystem": "vmd", 00:20:00.126 "config": [] 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "subsystem": "accel", 00:20:00.126 "config": [ 00:20:00.126 { 00:20:00.126 "method": "accel_set_options", 00:20:00.126 "params": { 00:20:00.126 "small_cache_size": 128, 00:20:00.126 "large_cache_size": 16, 00:20:00.126 "task_count": 2048, 00:20:00.126 "sequence_count": 2048, 00:20:00.126 "buf_count": 2048 00:20:00.126 } 00:20:00.126 } 00:20:00.126 ] 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "subsystem": "bdev", 00:20:00.126 "config": [ 00:20:00.126 { 00:20:00.126 "method": "bdev_set_options", 00:20:00.126 "params": { 00:20:00.126 "bdev_io_pool_size": 65535, 00:20:00.126 "bdev_io_cache_size": 256, 00:20:00.126 "bdev_auto_examine": true, 00:20:00.126 "iobuf_small_cache_size": 128, 00:20:00.126 "iobuf_large_cache_size": 16 00:20:00.126 } 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "method": "bdev_raid_set_options", 00:20:00.126 "params": { 00:20:00.126 "process_window_size_kb": 1024, 00:20:00.126 "process_max_bandwidth_mb_sec": 0 00:20:00.126 } 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "method": "bdev_iscsi_set_options", 00:20:00.126 "params": { 00:20:00.126 "timeout_sec": 30 00:20:00.126 } 00:20:00.126 }, 00:20:00.126 { 00:20:00.126 "method": "bdev_nvme_set_options", 00:20:00.126 "params": { 00:20:00.126 "action_on_timeout": "none", 00:20:00.126 "timeout_us": 0, 00:20:00.126 "timeout_admin_us": 0, 00:20:00.126 "keep_alive_timeout_ms": 10000, 00:20:00.126 "arbitration_burst": 0, 00:20:00.126 
"low_priority_weight": 0, 00:20:00.126 "medium_priority_weight": 0, 00:20:00.126 "high_priority_weight": 0, 00:20:00.126 "nvme_adminq_poll_period_us": 10000, 00:20:00.126 "nvme_ioq_poll_period_us": 0, 00:20:00.126 "io_queue_requests": 512, 00:20:00.126 "delay_cmd_submit": true, 00:20:00.126 "transport_retry_count": 4, 00:20:00.126 "bdev_retry_count": 3, 00:20:00.126 "transport_ack_timeout": 0, 00:20:00.126 "ctrlr_loss_timeout_sec": 0, 00:20:00.126 "reconnect_delay_sec": 0, 00:20:00.126 "fast_io_fail_timeout_sec": 0, 00:20:00.126 "disable_auto_failback": false, 00:20:00.126 "generate_uuids": false, 00:20:00.126 "transport_tos": 0, 00:20:00.126 "nvme_error_stat": false, 00:20:00.126 "rdma_srq_size": 0, 00:20:00.126 "io_path_stat": false, 00:20:00.126 "allow_accel_sequence": false, 00:20:00.126 "rdma_max_cq_size": 0, 00:20:00.126 "rdma_cm_event_timeout_ms": 0, 00:20:00.126 "dhchap_digests": [ 00:20:00.126 "sha256", 00:20:00.126 "sha384", 00:20:00.126 "sha512" 00:20:00.126 ], 00:20:00.127 "dhchap_dhgroups": [ 00:20:00.127 "null", 00:20:00.127 "ffdhe2048", 00:20:00.127 "ffdhe3072", 00:20:00.127 "ffdhe4096", 00:20:00.127 "ffdhe6144", 00:20:00.127 "ffdhe8192" 00:20:00.127 ] 00:20:00.127 } 00:20:00.127 }, 00:20:00.127 { 00:20:00.127 "method": "bdev_nvme_attach_controller", 00:20:00.127 "params": { 00:20:00.127 "name": "nvme0", 00:20:00.127 "trtype": "TCP", 00:20:00.127 "adrfam": "IPv4", 00:20:00.127 "traddr": "10.0.0.2", 00:20:00.127 "trsvcid": "4420", 00:20:00.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.127 "prchk_reftag": false, 00:20:00.127 "prchk_guard": false, 00:20:00.127 "ctrlr_loss_timeout_sec": 0, 00:20:00.127 "reconnect_delay_sec": 0, 00:20:00.127 "fast_io_fail_timeout_sec": 0, 00:20:00.127 "psk": "key0", 00:20:00.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.127 "hdgst": false, 00:20:00.127 "ddgst": false, 00:20:00.127 "multipath": "multipath" 00:20:00.127 } 00:20:00.127 }, 00:20:00.127 { 00:20:00.127 "method": "bdev_nvme_set_hotplug", 
00:20:00.127 "params": { 00:20:00.127 "period_us": 100000, 00:20:00.127 "enable": false 00:20:00.127 } 00:20:00.127 }, 00:20:00.127 { 00:20:00.127 "method": "bdev_enable_histogram", 00:20:00.127 "params": { 00:20:00.127 "name": "nvme0n1", 00:20:00.127 "enable": true 00:20:00.127 } 00:20:00.127 }, 00:20:00.127 { 00:20:00.127 "method": "bdev_wait_for_examine" 00:20:00.127 } 00:20:00.127 ] 00:20:00.127 }, 00:20:00.127 { 00:20:00.127 "subsystem": "nbd", 00:20:00.127 "config": [] 00:20:00.127 } 00:20:00.127 ] 00:20:00.127 }' 00:20:00.127 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.127 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.127 [2024-10-17 16:48:13.628143] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:20:00.127 [2024-10-17 16:48:13.628246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380284 ] 00:20:00.127 [2024-10-17 16:48:13.689341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.127 [2024-10-17 16:48:13.753097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.385 [2024-10-17 16:48:13.931217] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.385 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.385 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.385 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:00.385 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@279 -- # jq -r '.[].name' 00:20:00.644 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.644 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:00.903 Running I/O for 1 seconds... 00:20:01.837 3334.00 IOPS, 13.02 MiB/s 00:20:01.837 Latency(us) 00:20:01.837 [2024-10-17T14:48:15.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.837 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.837 Verification LBA range: start 0x0 length 0x2000 00:20:01.837 nvme0n1 : 1.02 3381.41 13.21 0.00 0.00 37493.87 7427.41 35535.08 00:20:01.837 [2024-10-17T14:48:15.527Z] =================================================================================================================== 00:20:01.837 [2024-10-17T14:48:15.527Z] Total : 3381.41 13.21 0.00 0.00 37493.87 7427.41 35535.08 00:20:01.837 { 00:20:01.837 "results": [ 00:20:01.837 { 00:20:01.837 "job": "nvme0n1", 00:20:01.837 "core_mask": "0x2", 00:20:01.837 "workload": "verify", 00:20:01.837 "status": "finished", 00:20:01.837 "verify_range": { 00:20:01.837 "start": 0, 00:20:01.837 "length": 8192 00:20:01.837 }, 00:20:01.837 "queue_depth": 128, 00:20:01.837 "io_size": 4096, 00:20:01.837 "runtime": 1.023834, 00:20:01.837 "iops": 3381.4075328617723, 00:20:01.837 "mibps": 13.208623175241298, 00:20:01.837 "io_failed": 0, 00:20:01.837 "io_timeout": 0, 00:20:01.837 "avg_latency_us": 37493.870973318786, 00:20:01.837 "min_latency_us": 7427.413333333333, 00:20:01.837 "max_latency_us": 35535.07555555556 00:20:01.837 } 00:20:01.837 ], 00:20:01.837 "core_count": 1 00:20:01.837 } 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 
-- # cleanup 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:01.838 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:01.838 nvmf_trace.0 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2380284 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2380284 ']' 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2380284 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2380284 00:20:02.096 16:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2380284' 00:20:02.096 killing process with pid 2380284 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2380284 00:20:02.096 Received shutdown signal, test time was about 1.000000 seconds 00:20:02.096 00:20:02.096 Latency(us) 00:20:02.096 [2024-10-17T14:48:15.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.096 [2024-10-17T14:48:15.786Z] =================================================================================================================== 00:20:02.096 [2024-10-17T14:48:15.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.096 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2380284 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.354 rmmod nvme_tcp 00:20:02.354 rmmod nvme_fabrics 00:20:02.354 rmmod nvme_keyring 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.354 16:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 2380136 ']' 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 2380136 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2380136 ']' 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2380136 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2380136 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2380136' 00:20:02.354 killing process with pid 2380136 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2380136 00:20:02.354 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2380136 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:02.613 
16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.613 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rbD5lvAYub /tmp/tmp.LY3SAp84rQ /tmp/tmp.ks8EnVjUHZ 00:20:05.148 00:20:05.148 real 1m23.006s 00:20:05.148 user 2m19.935s 00:20:05.148 sys 0m24.549s 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.148 ************************************ 00:20:05.148 END TEST nvmf_tls 00:20:05.148 ************************************ 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.148 ************************************ 00:20:05.148 START TEST nvmf_fips 00:20:05.148 ************************************ 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:05.148 * Looking for test storage... 00:20:05.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.148 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:05.148 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:05.149 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.149 --rc genhtml_branch_coverage=1 00:20:05.149 --rc genhtml_function_coverage=1 00:20:05.149 --rc genhtml_legend=1 00:20:05.149 --rc geninfo_all_blocks=1 00:20:05.149 --rc geninfo_unexecuted_blocks=1 00:20:05.149 00:20:05.149 ' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.149 --rc genhtml_branch_coverage=1 00:20:05.149 --rc genhtml_function_coverage=1 00:20:05.149 --rc genhtml_legend=1 00:20:05.149 --rc geninfo_all_blocks=1 00:20:05.149 --rc geninfo_unexecuted_blocks=1 00:20:05.149 00:20:05.149 ' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.149 --rc genhtml_branch_coverage=1 00:20:05.149 --rc genhtml_function_coverage=1 00:20:05.149 --rc genhtml_legend=1 00:20:05.149 --rc geninfo_all_blocks=1 00:20:05.149 --rc geninfo_unexecuted_blocks=1 00:20:05.149 00:20:05.149 ' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.149 --rc genhtml_branch_coverage=1 00:20:05.149 --rc genhtml_function_coverage=1 00:20:05.149 --rc genhtml_legend=1 00:20:05.149 --rc geninfo_all_blocks=1 00:20:05.149 --rc geninfo_unexecuted_blocks=1 00:20:05.149 00:20:05.149 ' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.149 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.149 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:05.149 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:05.150 Error setting digest 00:20:05.150 40E29A65737F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:05.150 40E29A65737F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:05.150 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.150 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.050 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:07.051 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:07.051 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:07.051 Found net devices under 0000:09:00.0: cvl_0_0 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:07.051 Found net devices under 0000:09:00.1: cvl_0_1 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.051 16:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.051 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.309 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.309 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:20:07.310 00:20:07.310 --- 10.0.0.2 ping statistics --- 00:20:07.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.310 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:07.310 00:20:07.310 --- 10.0.0.1 ping statistics --- 00:20:07.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.310 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:07.310 16:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=2382640 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 2382640 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2382640 ']' 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.310 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.310 [2024-10-17 16:48:20.914448] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:20:07.310 [2024-10-17 16:48:20.914523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.310 [2024-10-17 16:48:20.979301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.568 [2024-10-17 16:48:21.040912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.568 [2024-10-17 16:48:21.040955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.568 [2024-10-17 16:48:21.040998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.568 [2024-10-17 16:48:21.041019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.568 [2024-10-17 16:48:21.041036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.568 [2024-10-17 16:48:21.041626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.uuq 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.uuq 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.uuq 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.uuq 00:20:07.568 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:07.826 [2024-10-17 16:48:21.492301] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.826 [2024-10-17 16:48:21.508297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.826 [2024-10-17 16:48:21.508531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.084 malloc0 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2382675 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2382675 /var/tmp/bdevperf.sock 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2382675 ']' 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.084 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 [2024-10-17 16:48:21.642936] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:20:08.084 [2024-10-17 16:48:21.643056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382675 ] 00:20:08.084 [2024-10-17 16:48:21.702916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.084 [2024-10-17 16:48:21.762596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.343 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.343 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:08.343 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.uuq 00:20:08.599 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.857 [2024-10-17 16:48:22.382639] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.857 TLSTESTn1 00:20:08.857 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.114 Running I/O for 10 seconds... 
00:20:10.979 3254.00 IOPS, 12.71 MiB/s [2024-10-17T14:48:26.042Z] 3294.50 IOPS, 12.87 MiB/s [2024-10-17T14:48:26.974Z] 3321.33 IOPS, 12.97 MiB/s [2024-10-17T14:48:27.905Z] 3326.75 IOPS, 13.00 MiB/s [2024-10-17T14:48:28.841Z] 3335.60 IOPS, 13.03 MiB/s [2024-10-17T14:48:29.774Z] 3343.67 IOPS, 13.06 MiB/s [2024-10-17T14:48:30.710Z] 3349.29 IOPS, 13.08 MiB/s [2024-10-17T14:48:31.645Z] 3358.75 IOPS, 13.12 MiB/s [2024-10-17T14:48:33.018Z] 3360.78 IOPS, 13.13 MiB/s [2024-10-17T14:48:33.018Z] 3362.10 IOPS, 13.13 MiB/s 00:20:19.328 Latency(us) 00:20:19.328 [2024-10-17T14:48:33.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.328 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.328 Verification LBA range: start 0x0 length 0x2000 00:20:19.328 TLSTESTn1 : 10.03 3364.28 13.14 0.00 0.00 37968.40 10291.58 46797.56 00:20:19.328 [2024-10-17T14:48:33.018Z] =================================================================================================================== 00:20:19.328 [2024-10-17T14:48:33.018Z] Total : 3364.28 13.14 0.00 0.00 37968.40 10291.58 46797.56 00:20:19.328 { 00:20:19.328 "results": [ 00:20:19.328 { 00:20:19.328 "job": "TLSTESTn1", 00:20:19.328 "core_mask": "0x4", 00:20:19.328 "workload": "verify", 00:20:19.328 "status": "finished", 00:20:19.328 "verify_range": { 00:20:19.328 "start": 0, 00:20:19.328 "length": 8192 00:20:19.328 }, 00:20:19.328 "queue_depth": 128, 00:20:19.328 "io_size": 4096, 00:20:19.328 "runtime": 10.031556, 00:20:19.328 "iops": 3364.2836664621123, 00:20:19.328 "mibps": 13.141733072117626, 00:20:19.328 "io_failed": 0, 00:20:19.328 "io_timeout": 0, 00:20:19.328 "avg_latency_us": 37968.399906323706, 00:20:19.328 "min_latency_us": 10291.579259259259, 00:20:19.328 "max_latency_us": 46797.55851851852 00:20:19.328 } 00:20:19.328 ], 00:20:19.328 "core_count": 1 00:20:19.328 } 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:19.328 
16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:19.328 nvmf_trace.0 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2382675 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2382675 ']' 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2382675 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2382675 00:20:19.328 16:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2382675' 00:20:19.328 killing process with pid 2382675 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2382675 00:20:19.328 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.328 00:20:19.328 Latency(us) 00:20:19.328 [2024-10-17T14:48:33.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.328 [2024-10-17T14:48:33.018Z] =================================================================================================================== 00:20:19.328 [2024-10-17T14:48:33.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.328 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2382675 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.587 rmmod nvme_tcp 00:20:19.587 rmmod nvme_fabrics 00:20:19.587 rmmod nvme_keyring 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 2382640 ']' 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 2382640 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2382640 ']' 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2382640 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2382640 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2382640' 00:20:19.587 killing process with pid 2382640 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2382640 00:20:19.587 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2382640 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.846 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.uuq 00:20:21.811 00:20:21.811 real 0m17.114s 00:20:21.811 user 0m22.454s 00:20:21.811 sys 0m5.515s 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.811 ************************************ 00:20:21.811 END TEST nvmf_fips 00:20:21.811 ************************************ 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.811 ************************************ 00:20:21.811 START TEST nvmf_control_msg_list 00:20:21.811 ************************************ 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:21.811 * Looking for test storage... 00:20:21.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:21.811 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.071 16:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:22.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.071 --rc genhtml_branch_coverage=1 00:20:22.071 --rc genhtml_function_coverage=1 00:20:22.071 --rc genhtml_legend=1 00:20:22.071 --rc geninfo_all_blocks=1 00:20:22.071 --rc geninfo_unexecuted_blocks=1 00:20:22.071 00:20:22.071 ' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:22.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.071 --rc genhtml_branch_coverage=1 00:20:22.071 --rc genhtml_function_coverage=1 00:20:22.071 --rc genhtml_legend=1 00:20:22.071 --rc geninfo_all_blocks=1 00:20:22.071 --rc geninfo_unexecuted_blocks=1 00:20:22.071 00:20:22.071 ' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:22.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.071 --rc genhtml_branch_coverage=1 00:20:22.071 --rc genhtml_function_coverage=1 00:20:22.071 --rc genhtml_legend=1 00:20:22.071 --rc geninfo_all_blocks=1 00:20:22.071 --rc geninfo_unexecuted_blocks=1 00:20:22.071 00:20:22.071 ' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:20:22.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.071 --rc genhtml_branch_coverage=1 00:20:22.071 --rc genhtml_function_coverage=1 00:20:22.071 --rc genhtml_legend=1 00:20:22.071 --rc geninfo_all_blocks=1 00:20:22.071 --rc geninfo_unexecuted_blocks=1 00:20:22.071 00:20:22.071 ' 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.071 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.072 16:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.072 16:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.072 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:24.622 16:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:24.622 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:24.623 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:24.623 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:24.623 16:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:24.623 Found net devices under 0000:09:00.0: cvl_0_0 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:24.623 16:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:24.623 Found net devices under 0000:09:00.1: cvl_0_1 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:24.623 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.624 16:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:24.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:24.624 00:20:24.624 --- 10.0.0.2 ping statistics --- 00:20:24.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.624 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:24.624 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:20:24.624 00:20:24.624 --- 10.0.0.1 ping statistics --- 00:20:24.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.625 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=2386055 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 2386055 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2386055 ']' 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.625 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.625 [2024-10-17 16:48:38.008546] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:20:24.625 [2024-10-17 16:48:38.008628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.625 [2024-10-17 16:48:38.072490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.625 [2024-10-17 16:48:38.131828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.625 [2024-10-17 16:48:38.131889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.626 [2024-10-17 16:48:38.131913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.626 [2024-10-17 16:48:38.131924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.626 [2024-10-17 16:48:38.131933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.626 [2024-10-17 16:48:38.132448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.626 [2024-10-17 16:48:38.270914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.626 Malloc0 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.626 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.626 [2024-10-17 16:48:38.310470] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2386089 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2386090 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2386091 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.886 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2386089 00:20:24.886 [2024-10-17 16:48:38.379354] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:24.886 [2024-10-17 16:48:38.379632] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:24.886 [2024-10-17 16:48:38.379892] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:25.820 Initializing NVMe Controllers 00:20:25.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:25.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:25.820 Initialization complete. Launching workers. 00:20:25.820 ======================================================== 00:20:25.820 Latency(us) 00:20:25.820 Device Information : IOPS MiB/s Average min max 00:20:25.820 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2292.00 8.95 435.93 164.65 41232.90 00:20:25.820 ======================================================== 00:20:25.820 Total : 2292.00 8.95 435.93 164.65 41232.90 00:20:25.820 00:20:25.820 Initializing NVMe Controllers 00:20:25.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:25.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:25.820 Initialization complete. Launching workers. 
00:20:25.820 ======================================================== 00:20:25.820 Latency(us) 00:20:25.820 Device Information : IOPS MiB/s Average min max 00:20:25.820 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 58.00 0.23 17492.96 196.21 41930.61 00:20:25.820 ======================================================== 00:20:25.820 Total : 58.00 0.23 17492.96 196.21 41930.61 00:20:25.820 00:20:25.820 [2024-10-17 16:48:39.496878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66bd50 is same with the state(6) to be set 00:20:26.079 Initializing NVMe Controllers 00:20:26.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:26.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:26.079 Initialization complete. Launching workers. 00:20:26.079 ======================================================== 00:20:26.079 Latency(us) 00:20:26.079 Device Information : IOPS MiB/s Average min max 00:20:26.079 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40893.71 40687.47 40978.49 00:20:26.079 ======================================================== 00:20:26.079 Total : 25.00 0.10 40893.71 40687.47 40978.49 00:20:26.079 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2386090 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2386091 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@121 -- # sync 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.079 rmmod nvme_tcp 00:20:26.079 rmmod nvme_fabrics 00:20:26.079 rmmod nvme_keyring 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 2386055 ']' 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 2386055 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2386055 ']' 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2386055 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2386055 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2386055' 00:20:26.079 killing process with pid 2386055 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2386055 00:20:26.079 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2386055 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.338 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.867 16:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.867 00:20:28.867 real 0m6.520s 00:20:28.867 user 0m5.805s 00:20:28.867 sys 0m2.620s 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.867 ************************************ 00:20:28.867 END TEST nvmf_control_msg_list 00:20:28.867 ************************************ 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.867 ************************************ 00:20:28.867 START TEST nvmf_wait_for_buf 00:20:28.867 ************************************ 00:20:28.867 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:28.867 * Looking for test storage... 
00:20:28.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:20:28.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.867 --rc genhtml_branch_coverage=1 00:20:28.867 --rc genhtml_function_coverage=1 00:20:28.867 --rc genhtml_legend=1 00:20:28.867 --rc geninfo_all_blocks=1 00:20:28.867 --rc geninfo_unexecuted_blocks=1 00:20:28.867 00:20:28.867 ' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:28.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.867 --rc genhtml_branch_coverage=1 00:20:28.867 --rc genhtml_function_coverage=1 00:20:28.867 --rc genhtml_legend=1 00:20:28.867 --rc geninfo_all_blocks=1 00:20:28.867 --rc geninfo_unexecuted_blocks=1 00:20:28.867 00:20:28.867 ' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:28.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.867 --rc genhtml_branch_coverage=1 00:20:28.867 --rc genhtml_function_coverage=1 00:20:28.867 --rc genhtml_legend=1 00:20:28.867 --rc geninfo_all_blocks=1 00:20:28.867 --rc geninfo_unexecuted_blocks=1 00:20:28.867 00:20:28.867 ' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:28.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.867 --rc genhtml_branch_coverage=1 00:20:28.867 --rc genhtml_function_coverage=1 00:20:28.867 --rc genhtml_legend=1 00:20:28.867 --rc geninfo_all_blocks=1 00:20:28.867 --rc geninfo_unexecuted_blocks=1 00:20:28.867 00:20:28.867 ' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.867 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.868 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:30.769 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:30.769 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.769 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:30.770 Found net devices under 0000:09:00.0: cvl_0_0 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:30.770 16:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:30.770 Found net devices under 0000:09:00.1: cvl_0_1 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:30.770 16:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.770 16:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:30.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:20:30.770 00:20:30.770 --- 10.0.0.2 ping statistics --- 00:20:30.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.770 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:20:30.770 00:20:30.770 --- 10.0.0.1 ping statistics --- 00:20:30.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.770 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=2388162 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 2388162 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2388162 ']' 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.770 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.770 [2024-10-17 16:48:44.422552] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:20:30.770 [2024-10-17 16:48:44.422634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.029 [2024-10-17 16:48:44.486764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.029 [2024-10-17 16:48:44.545251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.029 [2024-10-17 16:48:44.545323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:31.029 [2024-10-17 16:48:44.545338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.029 [2024-10-17 16:48:44.545349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.029 [2024-10-17 16:48:44.545358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.029 [2024-10-17 16:48:44.545943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:31.029 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.030 
16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.030 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.288 Malloc0 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.288 [2024-10-17 16:48:44.794782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.288 [2024-10-17 16:48:44.819022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:31.288 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.288 [2024-10-17 16:48:44.889101] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:32.662 Initializing NVMe Controllers 00:20:32.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:32.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:32.662 Initialization complete. Launching workers. 00:20:32.662 ======================================================== 00:20:32.662 Latency(us) 00:20:32.662 Device Information : IOPS MiB/s Average min max 00:20:32.662 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33561.20 7990.57 71811.44 00:20:32.662 ======================================================== 00:20:32.662 Total : 124.00 15.50 33561.20 7990.57 71811.44 00:20:32.662 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.920 16:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.920 rmmod nvme_tcp 00:20:32.920 rmmod nvme_fabrics 00:20:32.920 rmmod nvme_keyring 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 2388162 ']' 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 2388162 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2388162 ']' 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2388162 
00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2388162 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2388162' 00:20:32.920 killing process with pid 2388162 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2388162 00:20:32.920 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2388162 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.179 16:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.179 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.083 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.083 00:20:35.083 real 0m6.755s 00:20:35.083 user 0m3.222s 00:20:35.083 sys 0m2.004s 00:20:35.083 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:35.083 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.083 ************************************ 00:20:35.083 END TEST nvmf_wait_for_buf 00:20:35.083 ************************************ 00:20:35.342 16:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:35.342 16:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:35.342 16:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:35.342 16:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:35.342 16:48:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.342 16:48:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.244 
16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:37.244 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.244 16:48:50 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:37.244 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:37.244 Found net devices under 0000:09:00.0: cvl_0_0 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.244 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:37.245 Found net devices under 0000:09:00.1: cvl_0_1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:37.245 ************************************ 00:20:37.245 START TEST nvmf_perf_adq 00:20:37.245 ************************************ 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
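The discovery loop traced above (`nvmf/common.sh@408-427`) maps each PCI address to its network interface by globbing the device's `net/` subdirectory in sysfs and stripping the path prefix, which is how `cvl_0_0` and `cvl_0_1` end up in `net_devs` and then `TCP_INTERFACE_LIST`. A minimal, self-contained sketch of that mechanism, using a fake sysfs tree under `mktemp` since the real `/sys/bus/pci/devices` paths require the actual NICs:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fake sysfs tree standing in for /sys/bus/pci/devices (illustration only;
# the real script globs the live sysfs of the test node).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:09:00.0/net/cvl_0_0" "$sysfs/0000:09:00.1/net/cvl_0_1"

pci_devs=("0000:09:00.0" "0000:09:00.1")
net_devs=()

for pci in "${pci_devs[@]}"; do
    # Glob the device's net/ directory, as nvmf/common.sh@409 does...
    pci_net_devs=("$sysfs/$pci/net/"*)
    # ...then strip everything up to the last '/' to keep only the ifname,
    # mirroring the "${pci_net_devs[@]##*/}" expansion at @425.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
```

With two E810 ports bound to the `ice` driver, `net_devs` ends up holding both `cvl_*` interfaces, which is the `(( 2 > 0 ))` check that gates running `perf_adq.sh` at all.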
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:37.245 * Looking for test storage... 00:20:37.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
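The `lt 1.15 2` trace above shows `scripts/common.sh` splitting both version strings on dots and comparing components numerically, left to right, to decide that the installed lcov predates 2.x (which selects the `--rc lcov_branch_coverage` style of options). A simplified sketch of that comparison; this is not the script's exact `cmp_versions` implementation, and pre-release suffixes are ignored:

```shell
#!/usr/bin/env bash
set -euo pipefail

# version_lt A B: succeeds (exit 0) when A is strictly older than B.
# Components are split on '.', missing components default to 0.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0   # earlier component decides: older
        (( a > b )) && return 1   # newer
    done
    return 1                      # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

The same left-to-right rule is why `2.39.2` compares as older than `2.40` even though `39.2` has more digits than `40`.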
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:37.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.245 --rc genhtml_branch_coverage=1 00:20:37.245 --rc genhtml_function_coverage=1 00:20:37.245 --rc genhtml_legend=1 00:20:37.245 --rc geninfo_all_blocks=1 00:20:37.245 --rc geninfo_unexecuted_blocks=1 00:20:37.245 00:20:37.245 ' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:37.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.245 --rc genhtml_branch_coverage=1 00:20:37.245 --rc genhtml_function_coverage=1 00:20:37.245 --rc genhtml_legend=1 00:20:37.245 --rc geninfo_all_blocks=1 00:20:37.245 --rc geninfo_unexecuted_blocks=1 00:20:37.245 00:20:37.245 ' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:37.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.245 --rc genhtml_branch_coverage=1 00:20:37.245 --rc genhtml_function_coverage=1 00:20:37.245 --rc genhtml_legend=1 00:20:37.245 --rc geninfo_all_blocks=1 00:20:37.245 --rc geninfo_unexecuted_blocks=1 00:20:37.245 00:20:37.245 ' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:37.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.245 --rc genhtml_branch_coverage=1 00:20:37.245 --rc genhtml_function_coverage=1 00:20:37.245 --rc genhtml_legend=1 00:20:37.245 --rc geninfo_all_blocks=1 00:20:37.245 --rc geninfo_unexecuted_blocks=1 00:20:37.245 00:20:37.245 ' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.245 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.246 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.246 16:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.246 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.246 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.246 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.246 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.503 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:37.503 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.503 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:39.403 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:39.404 16:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:39.404 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:39.404 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:39.404 Found net devices under 0000:09:00.0: cvl_0_0 00:20:39.404 16:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:39.404 Found net devices under 0000:09:00.1: cvl_0_1 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:39.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
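Before the ADQ perf run, `adq_reload_driver` (traced here as `perf_adq.sh@58-63`) loads the `sch_mqprio` queueing discipline, then unloads and reloads the `ice` driver and waits for the interfaces to come back. A dry-run sketch of that sequence; the `run` echo wrapper is an illustration only, since the real script invokes `modprobe`/`rmmod` directly and needs root plus an E810 NIC:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run wrapper: print the command instead of executing it.
# Drop the 'echo' to perform the reload for real (requires root).
run() { echo "+ $*"; }

adq_reload_driver() {
    run modprobe -a sch_mqprio   # qdisc that ADQ traffic classes rely on
    run rmmod ice                # unload the Intel E810 driver
    run modprobe ice             # reload it fresh
    run sleep 5                  # let the cvl_* interfaces reappear
}

adq_reload_driver
```

The 5-second sleep matches the `target/perf_adq.sh@63 -- # sleep 5` entry in the trace: the reload is asynchronous from the script's point of view, so it simply waits before `nvmftestinit` re-enumerates the PCI net devices.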
00:20:39.404 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:39.970 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:41.873 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:47.142 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:47.143 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:47.143 16:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:47.143 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
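The trace above (nvmf/common.sh@409-426) resolves each detected PCI address to its kernel network interface by globbing the device's `net/` subdirectory in sysfs, then stripping the path down to the bare interface name. A minimal sketch of that pattern, using a temporary directory as a stand-in for `/sys/bus/pci` so it runs unprivileged (the `cvl_0_0`/`cvl_0_1` names mirror this run's output; the tmpdir layout is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Stand-in sysfs tree mimicking /sys/bus/pci/devices/<pci>/net/<ifname>
sysfs=$(mktemp -d)
mkdir -p "$sysfs/devices/0000:09:00.0/net/cvl_0_0"
mkdir -p "$sysfs/devices/0000:09:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:09:00.0 0000:09:00.1; do
    # Glob the net/ subdirectory: each entry is a full path to an interface dir
    pci_net_devs=("$sysfs/devices/$pci/net/"*)
    # Strip everything up to the last slash, leaving the interface name
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -rf "$sysfs"
```

On real hardware the same glob against `/sys/bus/pci/devices/$pci/net/` yields the names the log reports, which the script then collects into `net_devs` for the TCP interface list.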
00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:47.143 Found net devices under 0000:09:00.0: cvl_0_0 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:47.143 Found net devices under 0000:09:00.1: cvl_0_1 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:20:47.143 00:20:47.143 --- 10.0.0.2 ping statistics --- 00:20:47.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.143 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:20:47.143 00:20:47.143 --- 10.0.0.1 ping statistics --- 00:20:47.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.143 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2392910 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2392910 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2392910 ']' 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.143 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.144 [2024-10-17 16:49:00.746650] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:20:47.144 [2024-10-17 16:49:00.746721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.144 [2024-10-17 16:49:00.815542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.402 [2024-10-17 16:49:00.881273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.402 [2024-10-17 16:49:00.881332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.402 [2024-10-17 16:49:00.881356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.402 [2024-10-17 16:49:00.881369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.402 [2024-10-17 16:49:00.881381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
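The `-m 0xF` core mask passed to nvmf_tgt above is why the EAL notice reports "Total cores available: 4" and why the reactor log that follows shows reactors on cores 0 through 3. A minimal sketch of how a hex core mask expands into a core list (the 64-bit bound is an assumption; SPDK's actual parsing lives in its env layer):

```shell
#!/usr/bin/env bash
# Expand an SPDK-style -m core mask into the list of selected cores.
mask=0xF
cores=()
for ((i = 0; i < 64; i++)); do
    # Bit i set => core i is in the mask
    (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "Total cores available: ${#cores[@]}"
echo "cores: ${cores[*]}"
```

With `mask=0xF` this selects cores 0, 1, 2, and 3, matching the four reactor start-up notices in the log.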
00:20:47.402 [2024-10-17 16:49:00.883014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.402 [2024-10-17 16:49:00.883055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.402 [2024-10-17 16:49:00.883170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.402 [2024-10-17 16:49:00.883173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:47.402 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.402 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:47.403 16:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.403 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.763 [2024-10-17 16:49:01.144334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.763 Malloc1 00:20:47.763 16:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.763 [2024-10-17 16:49:01.210116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2393029 00:20:47.763 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:47.763 16:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:49.727 "tick_rate": 2700000000, 00:20:49.727 "poll_groups": [ 00:20:49.727 { 00:20:49.727 "name": "nvmf_tgt_poll_group_000", 00:20:49.727 "admin_qpairs": 1, 00:20:49.727 "io_qpairs": 1, 00:20:49.727 "current_admin_qpairs": 1, 00:20:49.727 "current_io_qpairs": 1, 00:20:49.727 "pending_bdev_io": 0, 00:20:49.727 "completed_nvme_io": 20279, 00:20:49.727 "transports": [ 00:20:49.727 { 00:20:49.727 "trtype": "TCP" 00:20:49.727 } 00:20:49.727 ] 00:20:49.727 }, 00:20:49.727 { 00:20:49.727 "name": "nvmf_tgt_poll_group_001", 00:20:49.727 "admin_qpairs": 0, 00:20:49.727 "io_qpairs": 1, 00:20:49.727 "current_admin_qpairs": 0, 00:20:49.727 "current_io_qpairs": 1, 00:20:49.727 "pending_bdev_io": 0, 00:20:49.727 "completed_nvme_io": 19507, 00:20:49.727 "transports": [ 00:20:49.727 { 00:20:49.727 "trtype": "TCP" 00:20:49.727 } 00:20:49.727 ] 00:20:49.727 }, 00:20:49.727 { 00:20:49.727 "name": "nvmf_tgt_poll_group_002", 00:20:49.727 "admin_qpairs": 0, 00:20:49.727 "io_qpairs": 1, 00:20:49.727 "current_admin_qpairs": 0, 00:20:49.727 "current_io_qpairs": 1, 00:20:49.727 "pending_bdev_io": 0, 00:20:49.727 "completed_nvme_io": 19767, 00:20:49.727 
"transports": [ 00:20:49.727 { 00:20:49.727 "trtype": "TCP" 00:20:49.727 } 00:20:49.727 ] 00:20:49.727 }, 00:20:49.727 { 00:20:49.727 "name": "nvmf_tgt_poll_group_003", 00:20:49.727 "admin_qpairs": 0, 00:20:49.727 "io_qpairs": 1, 00:20:49.727 "current_admin_qpairs": 0, 00:20:49.727 "current_io_qpairs": 1, 00:20:49.727 "pending_bdev_io": 0, 00:20:49.727 "completed_nvme_io": 19881, 00:20:49.727 "transports": [ 00:20:49.727 { 00:20:49.727 "trtype": "TCP" 00:20:49.727 } 00:20:49.727 ] 00:20:49.727 } 00:20:49.727 ] 00:20:49.727 }' 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:49.727 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2393029 00:20:57.838 Initializing NVMe Controllers 00:20:57.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:57.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:57.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:57.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:57.838 Initialization complete. Launching workers. 
00:20:57.838 ======================================================== 00:20:57.838 Latency(us) 00:20:57.838 Device Information : IOPS MiB/s Average min max 00:20:57.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10372.39 40.52 6172.47 2420.43 9181.54 00:20:57.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10201.49 39.85 6276.05 2620.53 10107.60 00:20:57.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10368.79 40.50 6171.87 2535.58 10225.99 00:20:57.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10611.99 41.45 6032.89 2440.13 9827.63 00:20:57.838 ======================================================== 00:20:57.838 Total : 41554.66 162.32 6162.10 2420.43 10225.99 00:20:57.838 00:20:57.838 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:57.838 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:57.838 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:57.838 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.839 rmmod nvme_tcp 00:20:57.839 rmmod nvme_fabrics 00:20:57.839 rmmod nvme_keyring 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:57.839 16:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2392910 ']' 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2392910 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2392910 ']' 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2392910 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392910 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392910' 00:20:57.839 killing process with pid 2392910 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2392910 00:20:57.839 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2392910 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:20:58.098 
16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.098 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.631 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.631 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:00.631 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:00.631 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:00.890 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:02.793 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.062 16:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:08.062 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:08.062 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:08.062 Found net devices under 0000:09:00.0: cvl_0_0 00:21:08.062 16:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:08.062 Found net devices under 0000:09:00.1: cvl_0_1 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.062 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:08.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:21:08.063 00:21:08.063 --- 10.0.0.2 ping statistics --- 00:21:08.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.063 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:21:08.063 00:21:08.063 --- 10.0.0.1 ping statistics --- 00:21:08.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.063 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:08.063 net.core.busy_poll = 1 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:08.063 net.core.busy_read = 1 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2395660 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 
2395660 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2395660 ']' 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.063 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.063 [2024-10-17 16:49:21.593089] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:08.063 [2024-10-17 16:49:21.593187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.063 [2024-10-17 16:49:21.660615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.063 [2024-10-17 16:49:21.723800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.063 [2024-10-17 16:49:21.723857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.063 [2024-10-17 16:49:21.723884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.063 [2024-10-17 16:49:21.723898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:08.063 [2024-10-17 16:49:21.723917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.063 [2024-10-17 16:49:21.725606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.063 [2024-10-17 16:49:21.725657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.063 [2024-10-17 16:49:21.725781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.063 [2024-10-17 16:49:21.725784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.322 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.322 [2024-10-17 16:49:22.002357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.322 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.322 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:08.322 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.322 16:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.580 Malloc1 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.580 [2024-10-17 16:49:22.064622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2395697 
00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:08.580 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:10.479 "tick_rate": 2700000000, 00:21:10.479 "poll_groups": [ 00:21:10.479 { 00:21:10.479 "name": "nvmf_tgt_poll_group_000", 00:21:10.479 "admin_qpairs": 1, 00:21:10.479 "io_qpairs": 1, 00:21:10.479 "current_admin_qpairs": 1, 00:21:10.479 "current_io_qpairs": 1, 00:21:10.479 "pending_bdev_io": 0, 00:21:10.479 "completed_nvme_io": 25547, 00:21:10.479 "transports": [ 00:21:10.479 { 00:21:10.479 "trtype": "TCP" 00:21:10.479 } 00:21:10.479 ] 00:21:10.479 }, 00:21:10.479 { 00:21:10.479 "name": "nvmf_tgt_poll_group_001", 00:21:10.479 "admin_qpairs": 0, 00:21:10.479 "io_qpairs": 3, 00:21:10.479 "current_admin_qpairs": 0, 00:21:10.479 "current_io_qpairs": 3, 00:21:10.479 "pending_bdev_io": 0, 00:21:10.479 "completed_nvme_io": 26108, 00:21:10.479 "transports": [ 00:21:10.479 { 00:21:10.479 "trtype": "TCP" 00:21:10.479 } 00:21:10.479 ] 00:21:10.479 }, 00:21:10.479 { 00:21:10.479 "name": "nvmf_tgt_poll_group_002", 00:21:10.479 "admin_qpairs": 0, 00:21:10.479 "io_qpairs": 0, 00:21:10.479 "current_admin_qpairs": 0, 
00:21:10.479 "current_io_qpairs": 0, 00:21:10.479 "pending_bdev_io": 0, 00:21:10.479 "completed_nvme_io": 0, 00:21:10.479 "transports": [ 00:21:10.479 { 00:21:10.479 "trtype": "TCP" 00:21:10.479 } 00:21:10.479 ] 00:21:10.479 }, 00:21:10.479 { 00:21:10.479 "name": "nvmf_tgt_poll_group_003", 00:21:10.479 "admin_qpairs": 0, 00:21:10.479 "io_qpairs": 0, 00:21:10.479 "current_admin_qpairs": 0, 00:21:10.479 "current_io_qpairs": 0, 00:21:10.479 "pending_bdev_io": 0, 00:21:10.479 "completed_nvme_io": 0, 00:21:10.479 "transports": [ 00:21:10.479 { 00:21:10.479 "trtype": "TCP" 00:21:10.479 } 00:21:10.479 ] 00:21:10.479 } 00:21:10.479 ] 00:21:10.479 }' 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:10.479 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2395697 00:21:18.589 Initializing NVMe Controllers 00:21:18.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:18.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:18.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:18.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:18.589 Initialization complete. Launching workers. 
00:21:18.589 ======================================================== 00:21:18.589 Latency(us) 00:21:18.589 Device Information : IOPS MiB/s Average min max 00:21:18.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4533.00 17.71 14124.56 2232.42 62404.07 00:21:18.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4652.00 18.17 13762.92 2208.16 59597.49 00:21:18.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13594.60 53.10 4707.92 1822.51 7481.26 00:21:18.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4583.70 17.91 14010.25 1930.38 62555.45 00:21:18.589 ======================================================== 00:21:18.589 Total : 27363.30 106.89 9365.56 1822.51 62555.45 00:21:18.589 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.589 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.589 rmmod nvme_tcp 00:21:18.589 rmmod nvme_fabrics 00:21:18.848 rmmod nvme_keyring 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:18.848 16:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2395660 ']' 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2395660 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2395660 ']' 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2395660 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2395660 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2395660' 00:21:18.848 killing process with pid 2395660 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2395660 00:21:18.848 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2395660 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:19.107 
16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.107 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:21.010 00:21:21.010 real 0m43.853s 00:21:21.010 user 2m39.801s 00:21:21.010 sys 0m9.448s 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.010 ************************************ 00:21:21.010 END TEST nvmf_perf_adq 00:21:21.010 ************************************ 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.010 ************************************ 00:21:21.010 START TEST nvmf_shutdown 00:21:21.010 ************************************ 00:21:21.010 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:21.269 * Looking for test storage... 00:21:21.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.269 16:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.269 --rc genhtml_branch_coverage=1 00:21:21.269 --rc genhtml_function_coverage=1 00:21:21.269 --rc genhtml_legend=1 00:21:21.269 --rc geninfo_all_blocks=1 00:21:21.269 --rc geninfo_unexecuted_blocks=1 00:21:21.269 00:21:21.269 ' 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.269 --rc genhtml_branch_coverage=1 00:21:21.269 --rc genhtml_function_coverage=1 00:21:21.269 --rc genhtml_legend=1 00:21:21.269 --rc geninfo_all_blocks=1 00:21:21.269 --rc geninfo_unexecuted_blocks=1 00:21:21.269 00:21:21.269 ' 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.269 --rc genhtml_branch_coverage=1 00:21:21.269 --rc genhtml_function_coverage=1 00:21:21.269 --rc genhtml_legend=1 00:21:21.269 --rc geninfo_all_blocks=1 00:21:21.269 --rc geninfo_unexecuted_blocks=1 00:21:21.269 00:21:21.269 ' 00:21:21.269 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.269 --rc genhtml_branch_coverage=1 00:21:21.269 --rc genhtml_function_coverage=1 00:21:21.270 --rc genhtml_legend=1 00:21:21.270 --rc geninfo_all_blocks=1 00:21:21.270 --rc geninfo_unexecuted_blocks=1 00:21:21.270 00:21:21.270 ' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.270 ************************************ 00:21:21.270 START TEST nvmf_shutdown_tc1 00:21:21.270 ************************************ 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.270 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:23.172 16:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.172 16:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.172 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:23.173 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.173 16:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:23.173 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:23.173 Found net devices under 0000:09:00.0: cvl_0_0 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:23.173 Found net devices under 0000:09:00.1: cvl_0_1 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.173 16:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.173 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:21:23.432 00:21:23.432 --- 10.0.0.2 ping statistics --- 00:21:23.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.432 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:21:23.432 00:21:23.432 --- 10.0.0.1 ping statistics --- 00:21:23.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.432 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=2398862 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 2398862 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2398862 ']' 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:23.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.432 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.432 [2024-10-17 16:49:37.009790] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:23.432 [2024-10-17 16:49:37.009893] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.432 [2024-10-17 16:49:37.078438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.696 [2024-10-17 16:49:37.143491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.696 [2024-10-17 16:49:37.143536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.696 [2024-10-17 16:49:37.143563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.696 [2024-10-17 16:49:37.143577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.696 [2024-10-17 16:49:37.143588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.696 [2024-10-17 16:49:37.145373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.696 [2024-10-17 16:49:37.145416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.696 [2024-10-17 16:49:37.145473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.696 [2024-10-17 16:49:37.145475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.696 [2024-10-17 16:49:37.291224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.696 16:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.696 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.696 Malloc1 00:21:23.954 [2024-10-17 16:49:37.391998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.954 Malloc2 00:21:23.954 Malloc3 00:21:23.954 Malloc4 00:21:23.954 Malloc5 00:21:23.954 Malloc6 00:21:24.213 Malloc7 00:21:24.213 Malloc8 00:21:24.213 Malloc9 
00:21:24.213 Malloc10 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2399042 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2399042 /var/tmp/bdevperf.sock 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2399042 ']' 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:24.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.213 { 00:21:24.213 "params": { 00:21:24.213 "name": "Nvme$subsystem", 00:21:24.213 "trtype": "$TEST_TRANSPORT", 00:21:24.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.213 "adrfam": "ipv4", 00:21:24.213 "trsvcid": "$NVMF_PORT", 00:21:24.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.213 "hdgst": ${hdgst:-false}, 00:21:24.213 "ddgst": ${ddgst:-false} 00:21:24.213 }, 00:21:24.213 "method": "bdev_nvme_attach_controller" 00:21:24.213 } 00:21:24.213 EOF 00:21:24.213 )") 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.213 { 00:21:24.213 "params": { 00:21:24.213 "name": "Nvme$subsystem", 00:21:24.213 "trtype": "$TEST_TRANSPORT", 00:21:24.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.213 "adrfam": "ipv4", 00:21:24.213 "trsvcid": "$NVMF_PORT", 00:21:24.213 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.213 "hdgst": ${hdgst:-false}, 00:21:24.213 "ddgst": ${ddgst:-false} 00:21:24.213 }, 00:21:24.213 "method": "bdev_nvme_attach_controller" 00:21:24.213 } 00:21:24.213 EOF 00:21:24.213 )") 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.213 { 00:21:24.213 "params": { 00:21:24.213 "name": "Nvme$subsystem", 00:21:24.213 "trtype": "$TEST_TRANSPORT", 00:21:24.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.213 "adrfam": "ipv4", 00:21:24.213 "trsvcid": "$NVMF_PORT", 00:21:24.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.213 "hdgst": ${hdgst:-false}, 00:21:24.213 "ddgst": ${ddgst:-false} 00:21:24.213 }, 00:21:24.213 "method": "bdev_nvme_attach_controller" 00:21:24.213 } 00:21:24.213 EOF 00:21:24.213 )") 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.213 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.213 { 00:21:24.213 "params": { 00:21:24.213 "name": "Nvme$subsystem", 00:21:24.213 "trtype": "$TEST_TRANSPORT", 00:21:24.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.213 "adrfam": "ipv4", 00:21:24.213 "trsvcid": "$NVMF_PORT", 00:21:24.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.213 "hdgst": 
${hdgst:-false}, 00:21:24.213 "ddgst": ${ddgst:-false} 00:21:24.213 }, 00:21:24.213 "method": "bdev_nvme_attach_controller" 00:21:24.213 } 00:21:24.213 EOF 00:21:24.213 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.214 { 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme$subsystem", 00:21:24.214 "trtype": "$TEST_TRANSPORT", 00:21:24.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "$NVMF_PORT", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.214 "hdgst": ${hdgst:-false}, 00:21:24.214 "ddgst": ${ddgst:-false} 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 } 00:21:24.214 EOF 00:21:24.214 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.214 { 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme$subsystem", 00:21:24.214 "trtype": "$TEST_TRANSPORT", 00:21:24.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "$NVMF_PORT", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.214 "hdgst": ${hdgst:-false}, 00:21:24.214 "ddgst": ${ddgst:-false} 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 
00:21:24.214 } 00:21:24.214 EOF 00:21:24.214 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.214 { 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme$subsystem", 00:21:24.214 "trtype": "$TEST_TRANSPORT", 00:21:24.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "$NVMF_PORT", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.214 "hdgst": ${hdgst:-false}, 00:21:24.214 "ddgst": ${ddgst:-false} 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 } 00:21:24.214 EOF 00:21:24.214 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.214 { 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme$subsystem", 00:21:24.214 "trtype": "$TEST_TRANSPORT", 00:21:24.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "$NVMF_PORT", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.214 "hdgst": ${hdgst:-false}, 00:21:24.214 "ddgst": ${ddgst:-false} 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 } 00:21:24.214 EOF 00:21:24.214 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.214 { 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme$subsystem", 00:21:24.214 "trtype": "$TEST_TRANSPORT", 00:21:24.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "$NVMF_PORT", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.214 "hdgst": ${hdgst:-false}, 00:21:24.214 "ddgst": ${ddgst:-false} 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 } 00:21:24.214 EOF 00:21:24.214 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:24.214 { 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme$subsystem", 00:21:24.214 "trtype": "$TEST_TRANSPORT", 00:21:24.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "$NVMF_PORT", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.214 "hdgst": ${hdgst:-false}, 00:21:24.214 "ddgst": ${ddgst:-false} 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 } 00:21:24.214 EOF 00:21:24.214 )") 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # jq . 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:24.214 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme1", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme2", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme3", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme4", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 
00:21:24.214 "params": { 00:21:24.214 "name": "Nvme5", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme6", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme7", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme8", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:24.214 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme9", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.214 "adrfam": "ipv4", 00:21:24.214 "trsvcid": "4420", 00:21:24.214 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:24.214 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:24.214 "hdgst": false, 00:21:24.214 "ddgst": false 00:21:24.214 }, 00:21:24.214 "method": "bdev_nvme_attach_controller" 00:21:24.214 },{ 00:21:24.214 "params": { 00:21:24.214 "name": "Nvme10", 00:21:24.214 "trtype": "tcp", 00:21:24.214 "traddr": "10.0.0.2", 00:21:24.215 "adrfam": "ipv4", 00:21:24.215 "trsvcid": "4420", 00:21:24.215 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:24.215 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:24.215 "hdgst": false, 00:21:24.215 "ddgst": false 00:21:24.215 }, 00:21:24.215 "method": "bdev_nvme_attach_controller" 00:21:24.215 }' 00:21:24.215 [2024-10-17 16:49:37.901315] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:24.215 [2024-10-17 16:49:37.901406] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:24.473 [2024-10-17 16:49:37.964295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.473 [2024-10-17 16:49:38.024085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2399042 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:26.371 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:27.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2399042 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2398862 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.310 { 00:21:27.310 "params": { 00:21:27.310 "name": "Nvme$subsystem", 00:21:27.310 "trtype": "$TEST_TRANSPORT", 00:21:27.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.310 "adrfam": "ipv4", 00:21:27.310 "trsvcid": "$NVMF_PORT", 00:21:27.310 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.310 "hdgst": ${hdgst:-false}, 00:21:27.310 "ddgst": ${ddgst:-false} 00:21:27.310 }, 00:21:27.310 "method": "bdev_nvme_attach_controller" 00:21:27.310 } 00:21:27.310 EOF 00:21:27.310 )") 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.310 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.310 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": 
${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 
00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:27.311 { 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme$subsystem", 00:21:27.311 "trtype": "$TEST_TRANSPORT", 00:21:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "$NVMF_PORT", 00:21:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.311 "hdgst": ${hdgst:-false}, 00:21:27.311 "ddgst": ${ddgst:-false} 00:21:27.311 }, 00:21:27.311 "method": "bdev_nvme_attach_controller" 00:21:27.311 } 00:21:27.311 EOF 00:21:27.311 )") 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:27.311 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:27.311 "params": { 00:21:27.311 "name": "Nvme1", 00:21:27.311 "trtype": "tcp", 00:21:27.311 "traddr": "10.0.0.2", 00:21:27.311 "adrfam": "ipv4", 00:21:27.311 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme2", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 
00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme3", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme4", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme5", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme6", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme7", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme8", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme9", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 },{ 00:21:27.312 "params": { 00:21:27.312 "name": "Nvme10", 00:21:27.312 "trtype": "tcp", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "adrfam": "ipv4", 00:21:27.312 "trsvcid": "4420", 00:21:27.312 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:27.312 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:27.312 "hdgst": false, 00:21:27.312 "ddgst": false 00:21:27.312 }, 00:21:27.312 "method": "bdev_nvme_attach_controller" 00:21:27.312 }' 00:21:27.312 [2024-10-17 16:49:40.964915] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:21:27.312 [2024-10-17 16:49:40.965027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399456 ]
00:21:27.571 [2024-10-17 16:49:41.029150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:27.571 [2024-10-17 16:49:41.088716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:28.945 Running I/O for 1 seconds...
00:21:30.139 1806.00 IOPS, 112.88 MiB/s
00:21:30.139 Latency(us)
00:21:30.139 [2024-10-17T14:49:43.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.139 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.139 Verification LBA range: start 0x0 length 0x400
00:21:30.139 Nvme1n1 : 1.10 241.40 15.09 0.00 0.00 256933.24 20777.34 240784.12
00:21:30.139 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.139 Verification LBA range: start 0x0 length 0x400
00:21:30.139 Nvme2n1 : 1.09 234.79 14.67 0.00 0.00 265265.11 19612.25 265639.25
00:21:30.139 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.139 Verification LBA range: start 0x0 length 0x400
00:21:30.139 Nvme3n1 : 1.10 231.69 14.48 0.00 0.00 264218.93 17087.91 260978.92
00:21:30.139 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.139 Verification LBA range: start 0x0 length 0x400
00:21:30.139 Nvme4n1 : 1.09 234.06 14.63 0.00 0.00 256383.81 31651.46 240784.12
00:21:30.139 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.139 Verification LBA range: start 0x0 length 0x400
00:21:30.139 Nvme5n1 : 1.15 222.72 13.92 0.00 0.00 266302.01 19806.44 267192.70
00:21:30.139 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.139 Verification LBA range: start 0x0 length 0x400
00:21:30.140 Nvme6n1 : 1.11 230.26 14.39 0.00 0.00 252353.42 25243.50 237677.23
00:21:30.140 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.140 Verification LBA range: start 0x0 length 0x400
00:21:30.140 Nvme7n1 : 1.14 227.50 14.22 0.00 0.00 250985.24 4320.52 268746.15
00:21:30.140 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.140 Verification LBA range: start 0x0 length 0x400
00:21:30.140 Nvme8n1 : 1.19 269.82 16.86 0.00 0.00 209483.55 14369.37 253211.69
00:21:30.140 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.140 Verification LBA range: start 0x0 length 0x400
00:21:30.140 Nvme9n1 : 1.18 217.53 13.60 0.00 0.00 254670.51 20097.71 282727.16
00:21:30.140 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.140 Verification LBA range: start 0x0 length 0x400
00:21:30.140 Nvme10n1 : 1.19 268.29 16.77 0.00 0.00 203758.90 5048.70 262532.36
00:21:30.140 [2024-10-17T14:49:43.830Z] ===================================================================================================================
00:21:30.140 [2024-10-17T14:49:43.830Z] Total : 2378.07 148.63 0.00 0.00 246110.88 4320.52 282727.16
00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1
-- target/shutdown.sh@46 -- # nvmftestfini 00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.398 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.398 rmmod nvme_tcp 00:21:30.398 rmmod nvme_fabrics 00:21:30.398 rmmod nvme_keyring 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 2398862 ']' 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 2398862 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2398862 ']' 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2398862 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398862 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398862' 00:21:30.398 killing process with pid 2398862 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2398862 00:21:30.398 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2398862 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
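The `killprocess 2398862` call traced above follows a guard-then-kill shape: confirm the pid is still alive with `kill -0`, look up its command name with `ps --no-headers -o comm=`, refuse to signal a bare `sudo`, report, then kill and reap. A condensed, self-contained approximation of that flow (the real autotest_common.sh helper also handles retries and non-child pids, which are omitted here):

```shell
#!/usr/bin/env bash
# Condensed sketch of the killprocess flow from autotest_common.sh:
# verify liveness, inspect the command name, then kill and reap.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1       # process already gone?
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" = sudo ] && return 1       # never kill a bare sudo
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                      # reap our own child
  return 0
}

# Demonstrate against a throwaway background job instead of a real target.
sleep 30 &
bg=$!
killprocess "$bg"
```

The `kill -0` probe sends no signal; it only checks that the pid exists and is signalable, which is why the harness uses it both before killing (`kill -0 2398862`) and inside `waitforlisten`-style loops.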
00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.966 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.499 00:21:33.499 real 0m11.758s 00:21:33.499 user 0m34.574s 00:21:33.499 sys 0m3.055s 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.499 ************************************ 00:21:33.499 END TEST nvmf_shutdown_tc1 00:21:33.499 ************************************ 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.499 ************************************ 00:21:33.499 START TEST nvmf_shutdown_tc2 00:21:33.499 ************************************ 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:33.499 16:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.499 16:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.499 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:33.500 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:33.500 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:33.500 Found net devices under 0000:09:00.0: cvl_0_0 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.500 16:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:33.500 Found net devices under 0000:09:00.1: cvl_0_1 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:33.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:33.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:21:33.500
00:21:33.500 --- 10.0.0.2 ping statistics ---
00:21:33.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:33.500 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:33.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:33.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms
00:21:33.500
00:21:33.500 --- 10.0.0.1 ping statistics ---
00:21:33.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:33.500 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:33.500
16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2400225 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2400225 00:21:33.500 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2400225 ']' 00:21:33.501 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.501 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.501 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.501 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.501 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.501 [2024-10-17 16:49:46.894419] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:21:33.501 [2024-10-17 16:49:46.894493] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.501 [2024-10-17 16:49:46.960767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.501 [2024-10-17 16:49:47.020507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.501 [2024-10-17 16:49:47.020577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.501 [2024-10-17 16:49:47.020605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.501 [2024-10-17 16:49:47.020617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.501 [2024-10-17 16:49:47.020628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.501 [2024-10-17 16:49:47.022083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.501 [2024-10-17 16:49:47.022143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.501 [2024-10-17 16:49:47.022210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:33.501 [2024-10-17 16:49:47.022214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.501 [2024-10-17 16:49:47.176235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.501 16:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.501 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.762 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.762 Malloc1 00:21:33.762 [2024-10-17 16:49:47.272631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.762 Malloc2 00:21:33.762 Malloc3 00:21:33.762 Malloc4 00:21:33.762 Malloc5 00:21:34.026 Malloc6 00:21:34.026 Malloc7 00:21:34.026 Malloc8 00:21:34.026 Malloc9 
00:21:34.026 Malloc10 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2400404 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2400404 /var/tmp/bdevperf.sock 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2400404 ']' 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:34.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.285 { 00:21:34.285 "params": { 00:21:34.285 "name": "Nvme$subsystem", 00:21:34.285 "trtype": "$TEST_TRANSPORT", 00:21:34.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.285 "adrfam": "ipv4", 00:21:34.285 "trsvcid": "$NVMF_PORT", 00:21:34.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.285 "hdgst": ${hdgst:-false}, 00:21:34.285 "ddgst": ${ddgst:-false} 00:21:34.285 }, 00:21:34.285 "method": "bdev_nvme_attach_controller" 00:21:34.285 } 00:21:34.285 EOF 00:21:34.285 )") 00:21:34.285 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 
"adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": 
${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 
)") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.286 { 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme$subsystem", 00:21:34.286 "trtype": "$TEST_TRANSPORT", 00:21:34.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "$NVMF_PORT", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.286 "hdgst": ${hdgst:-false}, 00:21:34.286 "ddgst": ${ddgst:-false} 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 } 00:21:34.286 EOF 00:21:34.286 )") 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:34.286 
16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:21:34.286 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme1", 00:21:34.286 "trtype": "tcp", 00:21:34.286 "traddr": "10.0.0.2", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "4420", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.286 "hdgst": false, 00:21:34.286 "ddgst": false 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 },{ 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme2", 00:21:34.286 "trtype": "tcp", 00:21:34.286 "traddr": "10.0.0.2", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "4420", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.286 "hdgst": false, 00:21:34.286 "ddgst": false 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 },{ 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme3", 00:21:34.286 "trtype": "tcp", 00:21:34.286 "traddr": "10.0.0.2", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "4420", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.286 "hdgst": false, 00:21:34.286 "ddgst": false 00:21:34.286 }, 00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 },{ 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme4", 00:21:34.286 "trtype": "tcp", 00:21:34.286 "traddr": "10.0.0.2", 00:21:34.286 "adrfam": "ipv4", 00:21:34.286 "trsvcid": "4420", 00:21:34.286 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.286 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.286 "hdgst": false, 00:21:34.286 "ddgst": false 00:21:34.286 }, 
00:21:34.286 "method": "bdev_nvme_attach_controller" 00:21:34.286 },{ 00:21:34.286 "params": { 00:21:34.286 "name": "Nvme5", 00:21:34.286 "trtype": "tcp", 00:21:34.286 "traddr": "10.0.0.2", 00:21:34.286 "adrfam": "ipv4", 00:21:34.287 "trsvcid": "4420", 00:21:34.287 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.287 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.287 "hdgst": false, 00:21:34.287 "ddgst": false 00:21:34.287 }, 00:21:34.287 "method": "bdev_nvme_attach_controller" 00:21:34.287 },{ 00:21:34.287 "params": { 00:21:34.287 "name": "Nvme6", 00:21:34.287 "trtype": "tcp", 00:21:34.287 "traddr": "10.0.0.2", 00:21:34.287 "adrfam": "ipv4", 00:21:34.287 "trsvcid": "4420", 00:21:34.287 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.287 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.287 "hdgst": false, 00:21:34.287 "ddgst": false 00:21:34.287 }, 00:21:34.287 "method": "bdev_nvme_attach_controller" 00:21:34.287 },{ 00:21:34.287 "params": { 00:21:34.287 "name": "Nvme7", 00:21:34.287 "trtype": "tcp", 00:21:34.287 "traddr": "10.0.0.2", 00:21:34.287 "adrfam": "ipv4", 00:21:34.287 "trsvcid": "4420", 00:21:34.287 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.287 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.287 "hdgst": false, 00:21:34.287 "ddgst": false 00:21:34.287 }, 00:21:34.287 "method": "bdev_nvme_attach_controller" 00:21:34.287 },{ 00:21:34.287 "params": { 00:21:34.287 "name": "Nvme8", 00:21:34.287 "trtype": "tcp", 00:21:34.287 "traddr": "10.0.0.2", 00:21:34.287 "adrfam": "ipv4", 00:21:34.287 "trsvcid": "4420", 00:21:34.287 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.287 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.287 "hdgst": false, 00:21:34.287 "ddgst": false 00:21:34.287 }, 00:21:34.287 "method": "bdev_nvme_attach_controller" 00:21:34.287 },{ 00:21:34.287 "params": { 00:21:34.287 "name": "Nvme9", 00:21:34.287 "trtype": "tcp", 00:21:34.287 "traddr": "10.0.0.2", 00:21:34.287 "adrfam": "ipv4", 00:21:34.287 "trsvcid": "4420", 00:21:34.287 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.287 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.287 "hdgst": false, 00:21:34.287 "ddgst": false 00:21:34.287 }, 00:21:34.287 "method": "bdev_nvme_attach_controller" 00:21:34.287 },{ 00:21:34.287 "params": { 00:21:34.287 "name": "Nvme10", 00:21:34.287 "trtype": "tcp", 00:21:34.287 "traddr": "10.0.0.2", 00:21:34.287 "adrfam": "ipv4", 00:21:34.287 "trsvcid": "4420", 00:21:34.287 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.287 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.287 "hdgst": false, 00:21:34.287 "ddgst": false 00:21:34.287 }, 00:21:34.287 "method": "bdev_nvme_attach_controller" 00:21:34.287 }' 00:21:34.287 [2024-10-17 16:49:47.792181] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:34.287 [2024-10-17 16:49:47.792261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400404 ] 00:21:34.287 [2024-10-17 16:49:47.853607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.287 [2024-10-17 16:49:47.913031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.186 Running I/O for 10 seconds... 
00:21:36.445 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.445 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:36.445 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:36.445 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.445 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:36.445 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:36.703 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2400404 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2400404 
']' 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2400404 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.962 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2400404 00:21:37.221 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.221 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.221 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2400404' 00:21:37.221 killing process with pid 2400404 00:21:37.221 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2400404 00:21:37.221 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2400404 00:21:37.221 Received shutdown signal, test time was about 0.967372 seconds 00:21:37.221 00:21:37.221 Latency(us) 00:21:37.221 [2024-10-17T14:49:50.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme1n1 : 0.96 200.63 12.54 0.00 0.00 315329.30 21651.15 309135.74 00:21:37.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme2n1 : 0.94 203.76 12.73 0.00 0.00 304373.51 21554.06 302921.96 
00:21:37.221 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme3n1 : 0.93 216.62 13.54 0.00 0.00 276187.20 7281.78 302921.96 00:21:37.221 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme4n1 : 0.92 207.72 12.98 0.00 0.00 286083.60 21748.24 301368.51 00:21:37.221 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme5n1 : 0.94 204.79 12.80 0.00 0.00 284399.94 42525.58 282727.16 00:21:37.221 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme6n1 : 0.95 206.59 12.91 0.00 0.00 275737.74 1353.20 304475.40 00:21:37.221 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme7n1 : 0.93 206.19 12.89 0.00 0.00 270145.61 23592.96 301368.51 00:21:37.221 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme8n1 : 0.95 201.79 12.61 0.00 0.00 270858.11 24758.04 301368.51 00:21:37.221 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme9n1 : 0.97 197.61 12.35 0.00 0.00 270491.99 21359.88 337097.77 00:21:37.221 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.221 Verification LBA range: start 0x0 length 0x400 00:21:37.221 Nvme10n1 : 0.96 199.25 12.45 0.00 0.00 261708.86 21068.61 307582.29 00:21:37.221 [2024-10-17T14:49:50.911Z] =================================================================================================================== 00:21:37.221 
[2024-10-17T14:49:50.911Z] Total : 2044.95 127.81 0.00 0.00 281500.41 1353.20 337097.77 00:21:37.479 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:38.413 16:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2400225 00:21:38.413 16:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:38.413 16:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.413 rmmod nvme_tcp 00:21:38.413 rmmod nvme_fabrics 00:21:38.413 rmmod nvme_keyring 00:21:38.413 16:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 2400225 ']' 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 2400225 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2400225 ']' 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2400225 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.413 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2400225 00:21:38.671 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:38.671 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:38.671 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2400225' 00:21:38.671 killing process with pid 2400225 00:21:38.671 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2400225 00:21:38.671 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@974 -- # wait 2400225 00:21:39.237 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:39.237 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:39.237 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:39.237 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.238 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:41.141 00:21:41.141 real 0m7.999s 00:21:41.141 user 0m25.062s 00:21:41.141 sys 0m1.522s 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:41.141 ************************************ 00:21:41.141 END TEST nvmf_shutdown_tc2 00:21:41.141 ************************************ 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:41.141 ************************************ 00:21:41.141 START TEST nvmf_shutdown_tc3 00:21:41.141 ************************************ 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.141 16:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:41.141 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.141 16:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:41.141 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.141 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:41.142 Found net devices under 0000:09:00.0: cvl_0_0 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:41.142 Found net devices under 0000:09:00.1: cvl_0_1 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.142 16:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.142 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:21:41.401 00:21:41.401 --- 10.0.0.2 ping statistics --- 00:21:41.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.401 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:21:41.401 00:21:41.401 --- 10.0.0.1 ping statistics --- 00:21:41.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.401 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=2401336 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 2401336 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2401336 ']' 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.401 16:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.401 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.401 [2024-10-17 16:49:54.972564] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:41.401 [2024-10-17 16:49:54.972650] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.401 [2024-10-17 16:49:55.044507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.675 [2024-10-17 16:49:55.103228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.675 [2024-10-17 16:49:55.103273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.675 [2024-10-17 16:49:55.103308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.675 [2024-10-17 16:49:55.103319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.675 [2024-10-17 16:49:55.103330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.675 [2024-10-17 16:49:55.106021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.675 [2024-10-17 16:49:55.106085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.675 [2024-10-17 16:49:55.106151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.675 [2024-10-17 16:49:55.106155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.675 [2024-10-17 16:49:55.256202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.675 16:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
[the two xtrace lines above repeat 10 times in total, once per subsystem]
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:41.675 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:41.675 Malloc1
00:21:41.934 [2024-10-17 16:49:55.368055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:41.934 Malloc2
00:21:41.934 Malloc3
00:21:41.934 Malloc4
00:21:41.934 Malloc5
00:21:41.934 Malloc6
00:21:41.934 Malloc7
00:21:42.193 Malloc8
00:21:42.193 Malloc9
00:21:42.193 Malloc10 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2401514 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2401514 /var/tmp/bdevperf.sock 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2401514 ']' 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
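The shutdown.sh@27-29 trace above (rm -rf rpcs.txt, then a for/cat pass per subsystem, then one rpc_cmd replay at @36) builds the RPC batch that creates Malloc1 through Malloc10 and their subsystems. A minimal sketch of that accumulation pattern; the per-subsystem RPC lines in the heredoc are a hypothetical reconstruction, since the log only shows the for/cat loop, not the heredoc body:

```shell
# Sketch of the create_subsystems phase (shutdown.sh@27-36 in the trace above).
# The RPC commands written per subsystem are assumptions for illustration.
num_subsystems=({1..10})
rpcs=$(mktemp)

for i in "${num_subsystems[@]}"; do
cat >>"$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# the harness then replays the whole batch in a single rpc_cmd invocation:
# rpc_cmd < "$rpcs"
created=$(grep -c '^nvmf_create_subsystem' "$rpcs")
echo "queued $created subsystems"   # → queued 10 subsystems
rm -f "$rpcs"
```

Batching all RPCs into one file and replaying them through a single rpc_cmd call avoids paying the per-invocation startup cost of rpc.py once per subsystem.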
00:21:42.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=()
00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config
00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:21:42.193 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:21:42.193 {
00:21:42.193 "params": {
00:21:42.193 "name": "Nvme$subsystem",
00:21:42.193 "trtype": "$TEST_TRANSPORT",
00:21:42.193 "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:42.193 "adrfam": "ipv4",
00:21:42.193 "trsvcid": "$NVMF_PORT",
00:21:42.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:42.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:42.193 "hdgst": ${hdgst:-false},
00:21:42.194 "ddgst": ${ddgst:-false}
00:21:42.194 },
00:21:42.194 "method": "bdev_nvme_attach_controller"
00:21:42.194 }
00:21:42.194 EOF
00:21:42.194 )")
00:21:42.194 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat
[the for subsystem / config+= heredoc / cat sequence above repeats 10 times in total, once per bdevperf controller]
00:21:42.194
16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:21:42.194 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:21:42.194 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:42.194 "params": { 00:21:42.194 "name": "Nvme1", 00:21:42.194 "trtype": "tcp", 00:21:42.194 "traddr": "10.0.0.2", 00:21:42.194 "adrfam": "ipv4", 00:21:42.194 "trsvcid": "4420", 00:21:42.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.194 "hdgst": false, 00:21:42.194 "ddgst": false 00:21:42.194 }, 00:21:42.194 "method": "bdev_nvme_attach_controller" 00:21:42.194 },{ 00:21:42.194 "params": { 00:21:42.194 "name": "Nvme2", 00:21:42.194 "trtype": "tcp", 00:21:42.194 "traddr": "10.0.0.2", 00:21:42.194 "adrfam": "ipv4", 00:21:42.194 "trsvcid": "4420", 00:21:42.194 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.194 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:42.194 "hdgst": false, 00:21:42.194 "ddgst": false 00:21:42.194 }, 00:21:42.194 "method": "bdev_nvme_attach_controller" 00:21:42.194 },{ 00:21:42.194 "params": { 00:21:42.194 "name": "Nvme3", 00:21:42.194 "trtype": "tcp", 00:21:42.194 "traddr": "10.0.0.2", 00:21:42.194 "adrfam": "ipv4", 00:21:42.194 "trsvcid": "4420", 00:21:42.194 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:42.194 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:42.194 "hdgst": false, 00:21:42.194 "ddgst": false 00:21:42.194 }, 00:21:42.194 "method": "bdev_nvme_attach_controller" 00:21:42.194 },{ 00:21:42.194 "params": { 00:21:42.194 "name": "Nvme4", 00:21:42.194 "trtype": "tcp", 00:21:42.194 "traddr": "10.0.0.2", 00:21:42.194 "adrfam": "ipv4", 00:21:42.194 "trsvcid": "4420", 00:21:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 
00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 },{ 00:21:42.195 "params": { 00:21:42.195 "name": "Nvme5", 00:21:42.195 "trtype": "tcp", 00:21:42.195 "traddr": "10.0.0.2", 00:21:42.195 "adrfam": "ipv4", 00:21:42.195 "trsvcid": "4420", 00:21:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 },{ 00:21:42.195 "params": { 00:21:42.195 "name": "Nvme6", 00:21:42.195 "trtype": "tcp", 00:21:42.195 "traddr": "10.0.0.2", 00:21:42.195 "adrfam": "ipv4", 00:21:42.195 "trsvcid": "4420", 00:21:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 },{ 00:21:42.195 "params": { 00:21:42.195 "name": "Nvme7", 00:21:42.195 "trtype": "tcp", 00:21:42.195 "traddr": "10.0.0.2", 00:21:42.195 "adrfam": "ipv4", 00:21:42.195 "trsvcid": "4420", 00:21:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 },{ 00:21:42.195 "params": { 00:21:42.195 "name": "Nvme8", 00:21:42.195 "trtype": "tcp", 00:21:42.195 "traddr": "10.0.0.2", 00:21:42.195 "adrfam": "ipv4", 00:21:42.195 "trsvcid": "4420", 00:21:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 },{ 00:21:42.195 "params": { 00:21:42.195 "name": "Nvme9", 00:21:42.195 "trtype": "tcp", 00:21:42.195 "traddr": "10.0.0.2", 00:21:42.195 "adrfam": "ipv4", 00:21:42.195 "trsvcid": "4420", 00:21:42.195 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 },{ 00:21:42.195 "params": { 00:21:42.195 "name": "Nvme10", 00:21:42.195 "trtype": "tcp", 00:21:42.195 "traddr": "10.0.0.2", 00:21:42.195 "adrfam": "ipv4", 00:21:42.195 "trsvcid": "4420", 00:21:42.195 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:42.195 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:42.195 "hdgst": false, 00:21:42.195 "ddgst": false 00:21:42.195 }, 00:21:42.195 "method": "bdev_nvme_attach_controller" 00:21:42.195 }' 00:21:42.195 [2024-10-17 16:49:55.881243] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:42.195 [2024-10-17 16:49:55.881354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401514 ] 00:21:42.454 [2024-10-17 16:49:55.942732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.454 [2024-10-17 16:49:56.002246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.354 Running I/O for 10 seconds... 
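The gen_nvmf_target_json trace above shows the pattern nvmf/common.sh uses: one heredoc-generated JSON fragment is pushed onto a config array per subsystem, then the fragments are joined with IFS=, and printed for bdevperf's --json input. A trimmed, runnable sketch of that same pattern, using three subsystems instead of ten and fixed placeholder values for the environment variables:

```shell
# Sketch of the gen_nvmf_target_json fragment-accumulation pattern
# (nvmf/common.sh@558-584 in the trace above). Values are placeholders.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
# each iteration captures one heredoc-expanded JSON object into the array
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# join the fragments with commas, as common.sh@583-584 does with IFS=,
json=$(IFS=,; printf '%s\n' "${config[*]}")
```

Because the heredoc delimiter is unquoted, $subsystem and the ${hdgst:-false} defaults expand at capture time, which is exactly why the printed config above shows concrete values (Nvme1, tcp, 10.0.0.2, false) rather than the variable names.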
00:21:44.354 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.354 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:44.354 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:44.354 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.354 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:44.354 16:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.354 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.613 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:44.613 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:44.613 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:44.872 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:45.148 16:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2401336
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2401336 ']'
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2401336
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2401336
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2401336'
00:21:45.148 killing process with pid 2401336
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2401336
00:21:45.148 16:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2401336
00:21:45.148 [2024-10-17 16:49:58.677428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824260 is same with the state(6) to be set
[the same recv-state error for tqpair=0x1824260 repeats with successive timestamps through 16:49:58.677768]
00:21:45.149 [2024-10-17 16:49:58.678658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set
[the same recv-state error for tqpair=0x1826300 repeats with successive timestamps up to 16:49:58.679234, where this portion of the log is truncated]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679385] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.679466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826300 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680749] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680895] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.149 [2024-10-17 16:49:58.680930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.680942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.680953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.680965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.680977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.680995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681050] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681193] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681330] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.681451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824730 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683240] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683397] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683551] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683696] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.150 [2024-10-17 16:49:58.683789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683835] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683982] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.683995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824c00 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [2024-10-17 16:49:58.685270] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18250f0 is same with the state(6) to be set 00:21:45.151 [identical message repeated from 2024-10-17 16:49:58.685282 through 16:49:58.685893 for tqpair=0x18250f0 -- repeats elided]
[2024-10-17 16:49:58.686732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18255c0 is same with the state(6) to be set 00:21:45.151 [identical message repeated through 16:49:58.687520 for tqpair=0x18255c0 -- repeats elided]
[2024-10-17 16:49:58.688597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825ab0 is same with the state(6) to be set 00:21:45.152 [identical message repeated through 16:49:58.689235 for tqpair=0x1825ab0 -- repeats elided] [2024-10-17 16:49:58.689248] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825ab0 is same with the state(6) to be set 00:21:45.153 [identical message repeated from 2024-10-17 16:49:58.689259 through 16:49:58.689378 for tqpair=0x1825ab0 -- repeats elided] [2024-10-17 16:49:58.690859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.690900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.690919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.690934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.690948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.690962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.690976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.690997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa53c0 is same with the state(6) to be set 00:21:45.153 [2024-10-17 16:49:58.691078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4260 is same with the state(6) to be set 00:21:45.153 [2024-10-17 16:49:58.691291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691402] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24246f0 is same with the state(6) to be set 00:21:45.153 [2024-10-17 16:49:58.691503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.153 [2024-10-17 16:49:58.691579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.153 [2024-10-17 16:49:58.691592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faf7b0 is same with the 
state(6) to be set 00:21:45.154 [2024-10-17 16:49:58.691663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab990 is same with the state(6) to be set 00:21:45.154 [2024-10-17 16:49:58.691827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691861] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.154 [2024-10-17 16:49:58.691927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.691940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa6340 is same with the state(6) to be set 00:21:45.154 [2024-10-17 16:49:58.692040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154 [2024-10-17 16:49:58.692069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154 [2024-10-17 16:49:58.692097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154 [2024-10-17 16:49:58.692112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154 [2024-10-17 16:49:58.692133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.154
[2024-10-17 16:49:58.692589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.154
[2024-10-17 16:49:58.692597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.154
[2024-10-17 16:49:58.692605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825f80 is same with the state(6) to be set 00:21:45.155
[2024-10-17 16:49:58.692911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155
[2024-10-17 16:49:58.692927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155
[2024-10-17 16:49:58.692941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.692957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.692971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.692996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693480] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.155 [2024-10-17 16:49:58.693568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.155 [2024-10-17 16:49:58.693583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 
16:49:58.693822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.693952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.693967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.693974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.693981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.693995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with [2024-10-17 16:49:58.693997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:12the state(6) to be set 00:21:45.156 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.694019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with [2024-10-17 16:49:58.694021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:45.156 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.694034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.694047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.694060] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.694072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.694102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.156 [2024-10-17 16:49:58.694114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.156 [2024-10-17 16:49:58.694127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 
[2024-10-17 16:49:58.694162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.156 [2024-10-17 16:49:58.694225] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21b3820 was disconnected and freed. reset controller. 
00:21:45.156 [2024-10-17 16:49:58.694697] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5ea0 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.694934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.694957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.694978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.694996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.157 [2024-10-17 16:49:58.695518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.157 [2024-10-17 16:49:58.695652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.157 [2024-10-17 16:49:58.695664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.157 [2024-10-17 16:49:58.695671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.158 [2024-10-17 16:49:58.695768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695942] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.695973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.695997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.695997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696052] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 
00:21:45.158 [2024-10-17 16:49:58.696132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b6370 is same with the state(6) to be set 00:21:45.158 [2024-10-17 16:49:58.696274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.158 [2024-10-17 16:49:58.696375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.158 [2024-10-17 16:49:58.696390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 
16:49:58.696857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.159 [2024-10-17 16:49:58.696943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.696978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:45.159 [2024-10-17 16:49:58.697058] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23b2b30 was disconnected and freed. reset controller. 
00:21:45.159 [2024-10-17 16:49:58.700019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:45.159 [2024-10-17 16:49:58.700053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:45.159 [2024-10-17 16:49:58.700108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0310 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.700134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1faf7b0 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.701804] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.159 [2024-10-17 16:49:58.701947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.159 [2024-10-17 16:49:58.701976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1faf7b0 with addr=10.0.0.2, port=4420 00:21:45.159 [2024-10-17 16:49:58.702011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faf7b0 is same with the state(6) to be set 00:21:45.159 [2024-10-17 16:49:58.702106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.159 [2024-10-17 16:49:58.702132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d0310 with addr=10.0.0.2, port=4420 00:21:45.159 [2024-10-17 16:49:58.702147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0310 is same with the state(6) to be set 00:21:45.159 [2024-10-17 16:49:58.702169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa53c0 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.702203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4260 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.702260] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f181e0 is same with the state(6) to be set 00:21:45.159 [2024-10-17 16:49:58.702443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5350 is same with the state(6) to be set 00:21:45.159 [2024-10-17 16:49:58.702586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24246f0 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.702639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702719] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.159 [2024-10-17 16:49:58.702759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.159 [2024-10-17 16:49:58.702776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f4600 is same with the state(6) to be set 00:21:45.159 [2024-10-17 16:49:58.702811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fab990 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.702841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa6340 (9): Bad file descriptor 00:21:45.159 [2024-10-17 16:49:58.702948] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.159 [2024-10-17 16:49:58.703046] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.159 [2024-10-17 16:49:58.703127] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.159 [2024-10-17 16:49:58.703373] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.159 [2024-10-17 16:49:58.703451] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.159 [2024-10-17 16:49:58.703540] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.160 [2024-10-17 16:49:58.703618] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.160 [2024-10-17 16:49:58.703674] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1faf7b0 (9): Bad file descriptor 00:21:45.160 [2024-10-17 16:49:58.703700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0310 (9): Bad file descriptor 00:21:45.160 [2024-10-17 16:49:58.703785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.703807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.703830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.703846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.703863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.703877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.703893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.703907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.703923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.703938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.703953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.703967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.703983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:45.160 [2024-10-17 16:49:58.704149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.704983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.704998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.705021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.160 [2024-10-17 16:49:58.705037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.160 [2024-10-17 16:49:58.705051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 
16:49:58.705339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.161 [2024-10-17 16:49:58.705742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.161 [2024-10-17 16:49:58.705756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0030 is same with the state(6) to be set 00:21:45.161 [2024-10-17 16:49:58.705856] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23b0030 was disconnected and freed. reset controller. 00:21:45.161 [2024-10-17 16:49:58.705949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.161 [2024-10-17 16:49:58.705971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:45.161 [2024-10-17 16:49:58.705997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:45.161 [2024-10-17 16:49:58.706024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:45.161 [2024-10-17 16:49:58.706038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:45.161 [2024-10-17 16:49:58.706050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:45.161 [2024-10-17 16:49:58.707273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.161 [2024-10-17 16:49:58.707302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.161 [2024-10-17 16:49:58.707316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:45.161 [2024-10-17 16:49:58.707525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.161 [2024-10-17 16:49:58.707554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4260 with addr=10.0.0.2, port=4420 00:21:45.161 [2024-10-17 16:49:58.707571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4260 is same with the state(6) to be set 00:21:45.161 [2024-10-17 16:49:58.707918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4260 (9): Bad file descriptor 00:21:45.161 [2024-10-17 16:49:58.707988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:45.161 [2024-10-17 16:49:58.708025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:45.161 [2024-10-17 16:49:58.708040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:21:45.161 [2024-10-17 16:49:58.708107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.161 [2024-10-17 16:49:58.710842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:45.161 [2024-10-17 16:49:58.710871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:45.161 [2024-10-17 16:49:58.711043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.161 [2024-10-17 16:49:58.711070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d0310 with addr=10.0.0.2, port=4420 00:21:45.161 [2024-10-17 16:49:58.711087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0310 is same with the state(6) to be set 00:21:45.161 [2024-10-17 16:49:58.711195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.161 [2024-10-17 16:49:58.711220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1faf7b0 with addr=10.0.0.2, port=4420 00:21:45.161 [2024-10-17 16:49:58.711236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faf7b0 is same with the state(6) to be set 00:21:45.161 [2024-10-17 16:49:58.711295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0310 (9): Bad file descriptor 00:21:45.161 [2024-10-17 16:49:58.711318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1faf7b0 (9): Bad file descriptor 00:21:45.161 [2024-10-17 16:49:58.711369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:45.161 [2024-10-17 16:49:58.711386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:45.161 [2024-10-17 16:49:58.711399] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:45.161 [2024-10-17 16:49:58.711418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.162 [2024-10-17 16:49:58.711431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:45.162 [2024-10-17 16:49:58.711444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.162 [2024-10-17 16:49:58.711498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.162 [2024-10-17 16:49:58.711516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.162 [2024-10-17 16:49:58.711894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f181e0 (9): Bad file descriptor 00:21:45.162 [2024-10-17 16:49:58.711927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5350 (9): Bad file descriptor 00:21:45.162 [2024-10-17 16:49:58.711963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f4600 (9): Bad file descriptor 00:21:45.162 [2024-10-17 16:49:58.712133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 
[2024-10-17 16:49:58.712216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.162 [2024-10-17 16:49:58.712739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.712973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.712991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.162 [2024-10-17 16:49:58.713311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.162 [2024-10-17 16:49:58.713326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 
16:49:58.713418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.163 [2024-10-17 16:49:58.713581] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.163 [2024-10-17 16:49:58.713595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/WRITE command/completion NOTICE pairs repeated for cid:48-63 (lba:22528-24448), all ABORTED - SQ DELETION (00/08), timestamps 16:49:58.713610-16:49:58.714077 ...]
00:21:45.163 [2024-10-17 16:49:58.714092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b4a30 is same with the state(6) to be set
[... identical WRITE (cid:62-63, lba:32512-32640) and READ (cid:0-61, lba:24576-32384) command/completion NOTICE pairs repeated, all ABORTED - SQ DELETION (00/08), timestamps 16:49:58.715366-16:49:58.717292 ...]
00:21:45.165 [2024-10-17 16:49:58.717307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ad5f0 is same with the state(6) to be set
[... identical READ command/completion NOTICE pairs repeated for cid:0-36 (lba:24576-29184), all ABORTED - SQ DELETION (00/08), timestamps 16:49:58.718546-16:49:58.719666 ...]
p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 
16:49:58.719843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.719984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.719997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 
[2024-10-17 16:49:58.720377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.720496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.720509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aeb00 is same with the state(6) to be set 00:21:45.166 [2024-10-17 16:49:58.721782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.721805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.721826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.166 [2024-10-17 16:49:58.721841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.166 [2024-10-17 16:49:58.721857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.721870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.721886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.721899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.721914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.721928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.721943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.721957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.721971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.721985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.167 [2024-10-17 16:49:58.722166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 
16:49:58.722835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.722982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.722997] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.167 [2024-10-17 16:49:58.723018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.167 [2024-10-17 16:49:58.723034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.168 [2024-10-17 16:49:58.723048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.168 [2024-10-17 16:49:58.723063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.168 [2024-10-17 16:49:58.723077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.168 [2024-10-17 16:49:58.723093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.168 [2024-10-17 16:49:58.723106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.168 [2024-10-17 16:49:58.723122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.168 [2024-10-17 16:49:58.723136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.168 [2024-10-17 16:49:58.723151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.168 [2024-10-17 16:49:58.723165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.723709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.723723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b6a70 is same with the state(6) to be set
00:21:45.168 [2024-10-17 16:49:58.724929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:45.168 [2024-10-17 16:49:58.724960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:45.168 [2024-10-17 16:49:58.724978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:45.168 [2024-10-17 16:49:58.724995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:45.168 [2024-10-17 16:49:58.725404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.168 [2024-10-17 16:49:58.725435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa6340 with addr=10.0.0.2, port=4420
00:21:45.168 [2024-10-17 16:49:58.725452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa6340 is same with the state(6) to be set
00:21:45.168 [2024-10-17 16:49:58.725560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.168 [2024-10-17 16:49:58.725585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab990 with addr=10.0.0.2, port=4420
00:21:45.168 [2024-10-17 16:49:58.725600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab990 is same with the state(6) to be set
00:21:45.168 [2024-10-17 16:49:58.725676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.168 [2024-10-17 16:49:58.725700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa53c0 with addr=10.0.0.2, port=4420
00:21:45.168 [2024-10-17 16:49:58.725715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa53c0 is same with the state(6) to be set
00:21:45.168 [2024-10-17 16:49:58.725810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.168 [2024-10-17 16:49:58.725835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24246f0 with addr=10.0.0.2, port=4420
00:21:45.168 [2024-10-17 16:49:58.725850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24246f0 is same with the state(6) to be set
00:21:45.168 [2024-10-17 16:49:58.726734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.726971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.726984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.727008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.727025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.727041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.727055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.727070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.727084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.727100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.168 [2024-10-17 16:49:58.727114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.168 [2024-10-17 16:49:58.727130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.727972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.727997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.169 [2024-10-17 16:49:58.728265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.169 [2024-10-17 16:49:58.728281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.728731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.728746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b15b0 is same with the state(6) to be set
00:21:45.170 [2024-10-17 16:49:58.730023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.170 [2024-10-17 16:49:58.730641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.170 [2024-10-17 16:49:58.730655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.171 [2024-10-17 16:49:58.730892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.171 [2024-10-17 16:49:58.730906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.730922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.730936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.730952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.730966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.730982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.730996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731263] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731426] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 
16:49:58.731775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.171 [2024-10-17 16:49:58.731819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.171 [2024-10-17 16:49:58.731833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b40b0 is same with the state(6) to be set 00:21:45.172 [2024-10-17 16:49:58.733062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733174] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 
[2024-10-17 16:49:58.733517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.733972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.733988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.172 [2024-10-17 16:49:58.734332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.172 [2024-10-17 16:49:58.734346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734376] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 
16:49:58.734722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.734973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.734988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.173 [2024-10-17 16:49:58.735008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.173 [2024-10-17 16:49:58.735023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b54f0 is same with the state(6) to be set 00:21:45.173 [2024-10-17 16:49:58.737486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:45.173 [2024-10-17 16:49:58.737528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:45.173 
[2024-10-17 16:49:58.737548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:45.173 [2024-10-17 16:49:58.737565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:45.173 [2024-10-17 16:49:58.737582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:45.173 task offset: 30720 on job bdev=Nvme1n1 fails 00:21:45.173 00:21:45.173 Latency(us) 00:21:45.173 [2024-10-17T14:49:58.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme1n1 ended in about 0.91 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme1n1 : 0.91 211.27 13.20 70.42 0.00 224727.61 19223.89 251658.24 00:21:45.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme2n1 ended in about 0.93 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme2n1 : 0.93 138.30 8.64 69.15 0.00 299505.59 21942.42 276513.37 00:21:45.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme3n1 ended in about 0.93 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme3n1 : 0.93 206.73 12.92 68.91 0.00 220918.33 20777.34 246997.90 00:21:45.173 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme4n1 ended in about 0.93 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme4n1 : 0.93 206.02 12.88 68.67 0.00 217274.97 17573.36 257872.02 00:21:45.173 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme5n1 ended in about 0.92 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 
0x400 00:21:45.173 Nvme5n1 : 0.92 143.86 8.99 69.75 0.00 273238.06 36311.80 236123.78 00:21:45.173 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme6n1 ended in about 0.94 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme6n1 : 0.94 136.14 8.51 68.07 0.00 280671.83 20388.98 264085.81 00:21:45.173 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme7n1 ended in about 0.91 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme7n1 : 0.91 210.96 13.19 70.32 0.00 198443.43 8301.23 259425.47 00:21:45.173 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme8n1 ended in about 0.94 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme8n1 : 0.94 141.01 8.81 62.55 0.00 268941.59 16214.09 243891.01 00:21:45.173 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme9n1 ended in about 0.95 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme9n1 : 0.95 138.42 8.65 67.62 0.00 261115.09 20194.80 285834.05 00:21:45.173 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:45.173 Job: Nvme10n1 ended in about 0.94 seconds with error 00:21:45.173 Verification LBA range: start 0x0 length 0x400 00:21:45.173 Nvme10n1 : 0.94 140.08 8.76 68.44 0.00 251720.98 21456.97 267192.70 00:21:45.173 [2024-10-17T14:49:58.863Z] =================================================================================================================== 00:21:45.173 [2024-10-17T14:49:58.863Z] Total : 1672.79 104.55 683.91 0.00 245698.91 8301.23 285834.05 00:21:45.173 [2024-10-17 16:49:58.764772] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:45.173 [2024-10-17 16:49:58.764951] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa6340 (9): Bad file descriptor 00:21:45.173 [2024-10-17 16:49:58.765017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fab990 (9): Bad file descriptor 00:21:45.173 [2024-10-17 16:49:58.765039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa53c0 (9): Bad file descriptor 00:21:45.173 [2024-10-17 16:49:58.765057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24246f0 (9): Bad file descriptor 00:21:45.173 [2024-10-17 16:49:58.765115] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.173 [2024-10-17 16:49:58.765139] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.173 [2024-10-17 16:49:58.765157] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.173 [2024-10-17 16:49:58.765174] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.173 [2024-10-17 16:49:58.765191] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:45.173 [2024-10-17 16:49:58.765334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:45.174 [2024-10-17 16:49:58.765591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.174 [2024-10-17 16:49:58.765625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d4260 with addr=10.0.0.2, port=4420 00:21:45.174 [2024-10-17 16:49:58.765645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4260 is same with the state(6) to be set 00:21:45.174 [2024-10-17 16:49:58.765744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.174 [2024-10-17 16:49:58.765771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1faf7b0 with addr=10.0.0.2, port=4420 00:21:45.174 [2024-10-17 16:49:58.765787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faf7b0 is same with the state(6) to be set 00:21:45.174 [2024-10-17 16:49:58.765881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.174 [2024-10-17 16:49:58.765907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d0310 with addr=10.0.0.2, port=4420 00:21:45.174 [2024-10-17 16:49:58.765922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d0310 is same with the state(6) to be set 00:21:45.174 [2024-10-17 16:49:58.766029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.174 [2024-10-17 16:49:58.766066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f181e0 with addr=10.0.0.2, port=4420 00:21:45.174 [2024-10-17 16:49:58.766082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f181e0 is same with the state(6) to be set 00:21:45.174 [2024-10-17 16:49:58.766165] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.174 [2024-10-17 16:49:58.766191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f5350 with addr=10.0.0.2, port=4420 00:21:45.174 [2024-10-17 16:49:58.766207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5350 is same with the state(6) to be set 00:21:45.174 [2024-10-17 16:49:58.766234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.766247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.766263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:45.174 [2024-10-17 16:49:58.766284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.766308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.766321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:45.174 [2024-10-17 16:49:58.766343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.766363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.766375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:21:45.174 [2024-10-17 16:49:58.766392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.766405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.766418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:45.174 [2024-10-17 16:49:58.766454] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.174 [2024-10-17 16:49:58.766477] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.174 [2024-10-17 16:49:58.766497] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.174 [2024-10-17 16:49:58.766515] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:45.174 [2024-10-17 16:49:58.767464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.767488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.767501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.767513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:45.174 [2024-10-17 16:49:58.767586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.174 [2024-10-17 16:49:58.767612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f4600 with addr=10.0.0.2, port=4420 00:21:45.174 [2024-10-17 16:49:58.767628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f4600 is same with the state(6) to be set 00:21:45.174 [2024-10-17 16:49:58.767647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d4260 (9): Bad file descriptor 00:21:45.174 [2024-10-17 16:49:58.767665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1faf7b0 (9): Bad file descriptor 00:21:45.174 [2024-10-17 16:49:58.767682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d0310 (9): Bad file descriptor 00:21:45.174 [2024-10-17 16:49:58.767699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f181e0 (9): Bad file descriptor 00:21:45.174 [2024-10-17 16:49:58.767716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5350 (9): Bad file descriptor 00:21:45.174 [2024-10-17 16:49:58.768256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f4600 (9): Bad file descriptor 00:21:45.174 [2024-10-17 16:49:58.768294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.768308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.768321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:21:45.174 [2024-10-17 16:49:58.768338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.768351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.768363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.174 [2024-10-17 16:49:58.768378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.768397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.768410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:45.174 [2024-10-17 16:49:58.768426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.768439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.768451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:45.174 [2024-10-17 16:49:58.768466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.768478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.768490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:45.174 [2024-10-17 16:49:58.768550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:45.174 [2024-10-17 16:49:58.768568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.768580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.768591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.768602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:45.174 [2024-10-17 16:49:58.768614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:45.174 [2024-10-17 16:49:58.768626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:45.174 [2024-10-17 16:49:58.768638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:45.174 [2024-10-17 16:49:58.768674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:45.744 16:49:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2401514 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2401514 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2401514 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.795 rmmod nvme_tcp 00:21:46.795 rmmod nvme_fabrics 00:21:46.795 rmmod nvme_keyring 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:46.795 16:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 2401336 ']' 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 2401336 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2401336 ']' 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2401336 00:21:46.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2401336) - No such process 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2401336 is not found' 00:21:46.795 Process with pid 2401336 is not found 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:21:46.795 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:46.796 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:21:46.796 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:46.796 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:46.796 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.796 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.796 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:48.698 00:21:48.698 real 0m7.595s 00:21:48.698 user 0m19.026s 00:21:48.698 sys 0m1.446s 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.698 ************************************ 00:21:48.698 END TEST nvmf_shutdown_tc3 00:21:48.698 ************************************ 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:48.698 ************************************ 00:21:48.698 START TEST nvmf_shutdown_tc4 00:21:48.698 ************************************ 00:21:48.698 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:48.698 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.698 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:48.698 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:48.698 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.698 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.698 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:21:48.699 Found net devices under 0000:09:00.0: cvl_0_0 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:48.699 Found net devices under 0000:09:00.1: cvl_0_1 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == 
tcp ]] 00:21:48.699 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.958 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:21:48.958 00:21:48.958 --- 10.0.0.2 ping statistics --- 00:21:48.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.958 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:48.958 00:21:48.958 --- 10.0.0.1 ping statistics --- 00:21:48.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.958 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:48.958 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=2402437 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 2402437 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2402437 ']' 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.958 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:48.959 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.959 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:48.959 [2024-10-17 16:50:02.587218] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:21:48.959 [2024-10-17 16:50:02.587299] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.217 [2024-10-17 16:50:02.656704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.217 [2024-10-17 16:50:02.719578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.217 [2024-10-17 16:50:02.719639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.217 [2024-10-17 16:50:02.719655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.217 [2024-10-17 16:50:02.719668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.217 [2024-10-17 16:50:02.719679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.217 [2024-10-17 16:50:02.721323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.217 [2024-10-17 16:50:02.721435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.217 [2024-10-17 16:50:02.721504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.217 [2024-10-17 16:50:02.721508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.217 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.218 [2024-10-17 16:50:02.869708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.218 16:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.218 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.477 16:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.477 Malloc1 00:21:49.477 [2024-10-17 16:50:02.967689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.477 Malloc2 00:21:49.477 Malloc3 00:21:49.477 Malloc4 00:21:49.477 Malloc5 00:21:49.735 Malloc6 00:21:49.735 Malloc7 00:21:49.735 Malloc8 00:21:49.735 Malloc9 
00:21:49.735 Malloc10 00:21:49.735 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.735 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:49.735 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.735 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.994 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2402580 00:21:49.994 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:49.994 16:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:49.994 [2024-10-17 16:50:03.485758] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2402437 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2402437 ']' 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2402437 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2402437 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2402437' 00:21:55.277 killing process with pid 2402437 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2402437 00:21:55.277 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2402437 00:21:55.277 [2024-10-17 16:50:08.484709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa6b0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 
16:50:08.484813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa6b0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.484833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa6b0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.484867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa6b0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.484881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa6b0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.484892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa6b0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.485636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468250 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.485673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468250 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.485690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468250 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.485703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468250 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.485747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468250 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.486657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.486695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.486712] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.486725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.486737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.486749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.487589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fa1e0 is same with the state(6) to be set 00:21:55.277 [2024-10-17 16:50:08.504133] 
00:21:55.277 [2024-10-17 16:50:08.504216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455240 is same with the state(6) to be set (message repeated through 16:50:08.504326)
00:21:55.277 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.277 starting I/O failed: -6 (repeated)
00:21:55.277 [2024-10-17 16:50:08.504802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455710 is same with the state(6) to be set (message repeated through 16:50:08.504890)
00:21:55.277 [2024-10-17 16:50:08.505187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.278 [2024-10-17 16:50:08.505545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455be0 is same with the state(6) to be set (message repeated through 16:50:08.505589)
00:21:55.278 [2024-10-17 16:50:08.505955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1454d70 is same with the state(6) to be set (message repeated through 16:50:08.506099)
00:21:55.278 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.278 starting I/O failed: -6 (repeated)
00:21:55.278 [2024-10-17 16:50:08.506379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.278 [2024-10-17 16:50:08.507584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.278 [2024-10-17 16:50:08.507609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1453560 is same with the state(6) to be set (message repeated through 16:50:08.507711)
00:21:55.279 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.279 starting I/O failed: -6 (repeated)
00:21:55.279 [2024-10-17 16:50:08.509270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.279 NVMe io qpair process completion error
00:21:55.279 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.279 starting I/O failed: -6 (repeated)
00:21:55.279 [2024-10-17 16:50:08.510596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.280 [2024-10-17 16:50:08.511549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.280 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.280 starting I/O failed: -6 (repeated)
00:21:55.280 [2024-10-17 16:50:08.512788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.281 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.281 starting I/O failed: -6 (repeated)
00:21:55.281 [2024-10-17 16:50:08.515088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.281 NVMe io qpair process completion error
00:21:55.281 Write completed with error (sct=0, sc=8) (repeated)
00:21:55.281 starting I/O failed: -6 (repeated)
00:21:55.281 starting I/O failed: -6 00:21:55.281 Write completed with error (sct=0, sc=8) 00:21:55.281 Write completed with error (sct=0, sc=8) 00:21:55.281 Write completed with error (sct=0, sc=8) 00:21:55.281 starting I/O failed: -6 00:21:55.281 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 [2024-10-17 16:50:08.517109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, 
sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O 
failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 [2024-10-17 16:50:08.518464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 
00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.282 Write completed with error (sct=0, sc=8) 00:21:55.282 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, 
sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error 
(sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 [2024-10-17 16:50:08.520143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.283 NVMe io qpair process completion error 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 
00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 [2024-10-17 16:50:08.521514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with 
error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 
00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 starting I/O failed: -6 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 Write completed with error (sct=0, sc=8) 00:21:55.283 [2024-10-17 16:50:08.522612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 
starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 
Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 [2024-10-17 16:50:08.523739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 
00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: 
-6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.284 Write completed with error (sct=0, sc=8) 00:21:55.284 starting I/O failed: -6 00:21:55.285 Write completed with error (sct=0, sc=8) 00:21:55.285 starting I/O failed: -6 00:21:55.285 Write completed with error (sct=0, sc=8) 00:21:55.285 starting I/O failed: -6 00:21:55.285 Write completed with error (sct=0, sc=8) 00:21:55.285 starting I/O 
failed: -6
00:21:55.285 Write completed with error (sct=0, sc=8)
00:21:55.285 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:55.285 [2024-10-17 16:50:08.525514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.285 NVMe io qpair process completion error
[... repeated write-error lines omitted ...]
00:21:55.285 [2024-10-17 16:50:08.526872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines omitted ...]
00:21:55.285 [2024-10-17 16:50:08.527891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines omitted ...]
00:21:55.286 [2024-10-17 16:50:08.529071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines omitted ...]
00:21:55.286 [2024-10-17 16:50:08.531692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.286 NVMe io qpair process completion error
[... repeated write-error lines omitted ...]
00:21:55.287 [2024-10-17 16:50:08.532948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines omitted ...]
00:21:55.287 [2024-10-17 16:50:08.534066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines omitted ...]
00:21:55.287 [2024-10-17 16:50:08.535244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines omitted ...]
00:21:55.288 [2024-10-17 16:50:08.538928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.288 NVMe io qpair process completion error
[... repeated write-error lines omitted ...]
00:21:55.288 [2024-10-17 16:50:08.540160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines omitted ...]
00:21:55.289 [2024-10-17 16:50:08.541307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines omitted ...]
00:21:55.289 [2024-10-17 16:50:08.542483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines continue ...]
00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.289 starting I/O failed: -6 00:21:55.289 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: 
-6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O 
failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 [2024-10-17 16:50:08.546051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.290 NVMe io qpair process completion error 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting 
I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 [2024-10-17 16:50:08.547404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on 
qpair id 3 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 
Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 starting I/O failed: -6 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.290 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 [2024-10-17 16:50:08.548522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed 
with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 
starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 [2024-10-17 16:50:08.549663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.291 starting I/O failed: -6 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 
00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, 
sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.291 starting I/O failed: -6 00:21:55.291 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error 
(sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 [2024-10-17 16:50:08.551609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.292 NVMe io qpair process completion error 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error 
(sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 
starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 [2024-10-17 
16:50:08.553714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 starting I/O failed: -6 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 Write completed with error (sct=0, sc=8) 00:21:55.292 
starting I/O failed: -6
00:21:55.292 Write completed with error (sct=0, sc=8)
00:21:55.292 starting I/O failed: -6
00:21:55.293 [2024-10-17 16:50:08.554987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.293 [2024-10-17 16:50:08.556688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.293 NVMe io qpair process completion error
00:21:55.294 [2024-10-17 16:50:08.558105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.294 [2024-10-17 16:50:08.559165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.295 [2024-10-17 16:50:08.560287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.295 [2024-10-17 16:50:08.564584] nvme_qpair.c:
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.295 NVMe io qpair process completion error
00:21:55.295 Initializing NVMe Controllers
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:55.295 Controller IO queue size 128, less than required.
00:21:55.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:55.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:55.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:55.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:55.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:55.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:55.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:55.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:55.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:55.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:55.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:55.296 Initialization complete. Launching workers.
00:21:55.296 ========================================================
00:21:55.296 Latency(us)
00:21:55.296 Device Information : IOPS MiB/s Average min max
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1745.51 75.00 73350.51 1248.73 120989.26
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1763.22 75.76 72643.12 969.65 122876.41
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1781.14 76.53 71943.25 923.09 120466.13
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1822.66 78.32 70338.98 1050.39 118223.31
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1788.57 76.85 71735.22 798.57 120808.04
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1778.30 76.41 72203.91 1154.08 137171.40
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1820.91 78.24 70540.71 1075.01 119405.56
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1816.76 78.06 70730.44 1099.74 141836.71
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1715.57 73.72 73984.56 1068.56 119992.34
00:21:55.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1820.47 78.22 69748.22 950.12 120350.89
00:21:55.296 ========================================================
00:21:55.296 Total : 17853.11 767.13 71697.16 798.57 141836.71
00:21:55.296
00:21:55.296 [2024-10-17 16:50:08.569476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de07f0 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5040 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de4d10 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde780 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddede0 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de56a0 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddeab0 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de5370 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.569956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0bb0 is same with the state(6) to be set
00:21:55.296 [2024-10-17 16:50:08.570020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de09d0 is same with the state(6) to be set
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:55.555 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2402580
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2402580
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2402580
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:56.492 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 2402437 ']'
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 2402437
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2402437 ']'
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2402437
00:21:56.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2402437) - No such process
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2402437 is not found'
00:21:56.492 Process with pid 2402437 is not found
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:56.492 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:59.026
00:21:59.026 real 0m9.741s
00:21:59.026 user 0m23.705s
00:21:59.026 sys 0m5.696s
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:59.026 ************************************
00:21:59.026 END TEST nvmf_shutdown_tc4
00:21:59.026 ************************************
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:59.026
00:21:59.026 real 0m37.444s
00:21:59.026 user 1m42.535s
00:21:59.026 sys 0m11.921s
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:59.026 ************************************
00:21:59.026 END TEST nvmf_shutdown
00:21:59.026 ************************************
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:21:59.026
00:21:59.026 real 11m39.973s
00:21:59.026 user 27m58.986s
00:21:59.026 sys 2m42.744s
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:59.026 16:50:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:59.026 ************************************
00:21:59.026 END TEST nvmf_target_extra
00:21:59.026 ************************************
00:21:59.026 16:50:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:59.026 16:50:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:59.026 16:50:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:59.026 16:50:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:59.026 ************************************
00:21:59.026 START TEST nvmf_host
00:21:59.026 ************************************
00:21:59.026 16:50:12 nvmf_tcp.nvmf_host --
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:59.026 * Looking for test storage... 00:21:59.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.026 --rc genhtml_branch_coverage=1 00:21:59.026 --rc genhtml_function_coverage=1 00:21:59.026 --rc genhtml_legend=1 00:21:59.026 --rc geninfo_all_blocks=1 00:21:59.026 --rc geninfo_unexecuted_blocks=1 00:21:59.026 00:21:59.026 ' 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.026 --rc genhtml_branch_coverage=1 00:21:59.026 --rc genhtml_function_coverage=1 00:21:59.026 --rc genhtml_legend=1 00:21:59.026 --rc 
geninfo_all_blocks=1 00:21:59.026 --rc geninfo_unexecuted_blocks=1 00:21:59.026 00:21:59.026 ' 00:21:59.026 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.026 --rc genhtml_branch_coverage=1 00:21:59.026 --rc genhtml_function_coverage=1 00:21:59.026 --rc genhtml_legend=1 00:21:59.026 --rc geninfo_all_blocks=1 00:21:59.026 --rc geninfo_unexecuted_blocks=1 00:21:59.026 00:21:59.026 ' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.027 --rc genhtml_branch_coverage=1 00:21:59.027 --rc genhtml_function_coverage=1 00:21:59.027 --rc genhtml_legend=1 00:21:59.027 --rc geninfo_all_blocks=1 00:21:59.027 --rc geninfo_unexecuted_blocks=1 00:21:59.027 00:21:59.027 ' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.027 ************************************ 00:21:59.027 START TEST nvmf_multicontroller 00:21:59.027 ************************************ 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:59.027 * Looking for test storage... 
00:21:59.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.027 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.028 --rc genhtml_branch_coverage=1 00:21:59.028 --rc genhtml_function_coverage=1 
00:21:59.028 --rc genhtml_legend=1 00:21:59.028 --rc geninfo_all_blocks=1 00:21:59.028 --rc geninfo_unexecuted_blocks=1 00:21:59.028 00:21:59.028 ' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.028 --rc genhtml_branch_coverage=1 00:21:59.028 --rc genhtml_function_coverage=1 00:21:59.028 --rc genhtml_legend=1 00:21:59.028 --rc geninfo_all_blocks=1 00:21:59.028 --rc geninfo_unexecuted_blocks=1 00:21:59.028 00:21:59.028 ' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.028 --rc genhtml_branch_coverage=1 00:21:59.028 --rc genhtml_function_coverage=1 00:21:59.028 --rc genhtml_legend=1 00:21:59.028 --rc geninfo_all_blocks=1 00:21:59.028 --rc geninfo_unexecuted_blocks=1 00:21:59.028 00:21:59.028 ' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.028 --rc genhtml_branch_coverage=1 00:21:59.028 --rc genhtml_function_coverage=1 00:21:59.028 --rc genhtml_legend=1 00:21:59.028 --rc geninfo_all_blocks=1 00:21:59.028 --rc geninfo_unexecuted_blocks=1 00:21:59.028 00:21:59.028 ' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.028 16:50:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.028 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.029 16:50:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.931 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:00.932 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:00.932 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.932 16:50:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:00.932 Found net devices under 0000:09:00.0: cvl_0_0 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:00.932 Found net devices under 0000:09:00.1: cvl_0_1 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.932 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:22:01.191 00:22:01.191 --- 10.0.0.2 ping statistics --- 00:22:01.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.191 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:01.191 00:22:01.191 --- 10.0.0.1 ping statistics --- 00:22:01.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.191 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2405310 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2405310 00:22:01.191 16:50:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2405310 ']' 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.191 16:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.191 [2024-10-17 16:50:14.804019] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:01.192 [2024-10-17 16:50:14.804099] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.192 [2024-10-17 16:50:14.875107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:01.450 [2024-10-17 16:50:14.935499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.450 [2024-10-17 16:50:14.935573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:01.450 [2024-10-17 16:50:14.935600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.450 [2024-10-17 16:50:14.935612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.450 [2024-10-17 16:50:14.935621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.450 [2024-10-17 16:50:14.937269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.450 [2024-10-17 16:50:14.937341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.450 [2024-10-17 16:50:14.937344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 [2024-10-17 16:50:15.090386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 Malloc0 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.450 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 [2024-10-17 
16:50:15.150950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 [2024-10-17 16:50:15.158825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 Malloc1 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2405433 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2405433 /var/tmp/bdevperf.sock 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@831 -- # '[' -z 2405433 ']' 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.709 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.710 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.710 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 NVMe0n1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.969 1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:01.969 16:50:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 request: 00:22:01.969 { 00:22:01.969 "name": "NVMe0", 00:22:01.969 "trtype": "tcp", 00:22:01.969 "traddr": "10.0.0.2", 00:22:01.969 "adrfam": "ipv4", 00:22:01.969 "trsvcid": "4420", 00:22:01.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.969 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:01.969 "hostaddr": "10.0.0.1", 00:22:01.969 "prchk_reftag": false, 00:22:01.969 "prchk_guard": false, 00:22:01.969 "hdgst": false, 00:22:01.969 "ddgst": false, 00:22:01.969 "allow_unrecognized_csi": false, 00:22:01.969 "method": "bdev_nvme_attach_controller", 00:22:01.969 "req_id": 1 00:22:01.969 } 00:22:01.969 Got JSON-RPC error response 00:22:01.969 response: 00:22:01.969 { 00:22:01.969 "code": -114, 00:22:01.969 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:01.969 } 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:01.969 16:50:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.969 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 request: 00:22:01.969 { 00:22:01.969 "name": "NVMe0", 00:22:01.969 "trtype": "tcp", 00:22:01.969 "traddr": "10.0.0.2", 00:22:01.969 "adrfam": "ipv4", 00:22:01.969 "trsvcid": "4420", 00:22:01.969 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.969 "hostaddr": "10.0.0.1", 00:22:01.969 "prchk_reftag": false, 00:22:01.969 "prchk_guard": false, 00:22:01.970 "hdgst": false, 00:22:01.970 "ddgst": false, 00:22:01.970 "allow_unrecognized_csi": false, 00:22:01.970 "method": "bdev_nvme_attach_controller", 00:22:01.970 "req_id": 1 00:22:01.970 } 00:22:01.970 Got JSON-RPC error response 00:22:01.970 response: 00:22:01.970 { 00:22:01.970 "code": -114, 00:22:01.970 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:01.970 } 00:22:01.970 16:50:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.970 request: 00:22:01.970 { 00:22:01.970 "name": "NVMe0", 00:22:01.970 "trtype": "tcp", 00:22:01.970 "traddr": "10.0.0.2", 00:22:01.970 "adrfam": "ipv4", 00:22:01.970 "trsvcid": "4420", 00:22:01.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.970 "hostaddr": "10.0.0.1", 00:22:01.970 "prchk_reftag": false, 00:22:01.970 "prchk_guard": false, 00:22:01.970 "hdgst": false, 00:22:01.970 "ddgst": false, 00:22:01.970 "multipath": "disable", 00:22:01.970 "allow_unrecognized_csi": false, 00:22:01.970 "method": "bdev_nvme_attach_controller", 00:22:01.970 "req_id": 1 00:22:01.970 } 00:22:01.970 Got JSON-RPC error response 00:22:01.970 response: 00:22:01.970 { 00:22:01.970 "code": -114, 00:22:01.970 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:01.970 } 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.970 request: 00:22:01.970 { 00:22:01.970 "name": "NVMe0", 00:22:01.970 "trtype": "tcp", 00:22:01.970 "traddr": "10.0.0.2", 00:22:01.970 "adrfam": "ipv4", 00:22:01.970 "trsvcid": "4420", 00:22:01.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.970 "hostaddr": "10.0.0.1", 00:22:01.970 "prchk_reftag": false, 00:22:01.970 "prchk_guard": false, 00:22:01.970 "hdgst": false, 00:22:01.970 "ddgst": false, 00:22:01.970 "multipath": "failover", 00:22:01.970 "allow_unrecognized_csi": false, 00:22:01.970 "method": "bdev_nvme_attach_controller", 00:22:01.970 "req_id": 1 00:22:01.970 } 00:22:01.970 Got JSON-RPC error response 00:22:01.970 response: 00:22:01.970 { 00:22:01.970 "code": -114, 00:22:01.970 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:01.970 } 00:22:01.970 16:50:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.970 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.228 NVMe0n1 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.228 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.487 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:02.487 16:50:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:03.421 { 00:22:03.421 "results": [ 00:22:03.421 { 00:22:03.421 "job": "NVMe0n1", 00:22:03.421 "core_mask": "0x1", 00:22:03.421 "workload": "write", 00:22:03.421 "status": "finished", 00:22:03.421 "queue_depth": 128, 00:22:03.421 "io_size": 4096, 00:22:03.421 "runtime": 1.006722, 00:22:03.421 "iops": 18493.685446429103, 00:22:03.421 "mibps": 72.24095877511368, 00:22:03.421 "io_failed": 0, 00:22:03.421 "io_timeout": 0, 00:22:03.421 "avg_latency_us": 6910.674568537814, 00:22:03.421 "min_latency_us": 2014.6251851851853, 00:22:03.421 "max_latency_us": 12330.477037037037 00:22:03.421 } 00:22:03.421 ], 00:22:03.421 "core_count": 1 00:22:03.421 } 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2405433 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2405433 ']' 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2405433 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:03.421 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.422 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2405433 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2405433' 00:22:03.686 killing process with pid 2405433 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2405433 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2405433 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:03.686 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:03.687 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:03.687 [2024-10-17 16:50:15.265748] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:22:03.687 [2024-10-17 16:50:15.265848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405433 ] 00:22:03.687 [2024-10-17 16:50:15.323504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.687 [2024-10-17 16:50:15.382156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.687 [2024-10-17 16:50:15.930021] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 0a00649d-5702-4358-ba48-e890676d9891 already exists 00:22:03.687 [2024-10-17 16:50:15.930059] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:0a00649d-5702-4358-ba48-e890676d9891 alias for bdev NVMe1n1 00:22:03.687 [2024-10-17 16:50:15.930090] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:03.687 Running I/O for 1 seconds... 00:22:03.687 18490.00 IOPS, 72.23 MiB/s 00:22:03.687 Latency(us) 00:22:03.687 [2024-10-17T14:50:17.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.687 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:03.687 NVMe0n1 : 1.01 18493.69 72.24 0.00 0.00 6910.67 2014.63 12330.48 00:22:03.687 [2024-10-17T14:50:17.377Z] =================================================================================================================== 00:22:03.687 [2024-10-17T14:50:17.377Z] Total : 18493.69 72.24 0.00 0.00 6910.67 2014.63 12330.48 00:22:03.687 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.687 00:22:03.687 Latency(us) 00:22:03.687 [2024-10-17T14:50:17.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.687 [2024-10-17T14:50:17.377Z] =================================================================================================================== 00:22:03.687 [2024-10-17T14:50:17.377Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:03.687 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.687 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.946 rmmod nvme_tcp 00:22:03.946 rmmod nvme_fabrics 00:22:03.946 rmmod nvme_keyring 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2405310 ']' 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 2405310 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2405310 ']' 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2405310 
00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2405310 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2405310' 00:22:03.946 killing process with pid 2405310 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2405310 00:22:03.946 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2405310 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.206 16:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.109 00:22:06.109 real 0m7.380s 00:22:06.109 user 0m11.182s 00:22:06.109 sys 0m2.382s 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:06.109 ************************************ 00:22:06.109 END TEST nvmf_multicontroller 00:22:06.109 ************************************ 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.109 16:50:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.369 ************************************ 00:22:06.369 START TEST nvmf_aer 00:22:06.369 ************************************ 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:06.369 * Looking for test storage... 
00:22:06.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:06.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.369 --rc genhtml_branch_coverage=1 00:22:06.369 --rc genhtml_function_coverage=1 00:22:06.369 --rc genhtml_legend=1 00:22:06.369 --rc geninfo_all_blocks=1 00:22:06.369 --rc geninfo_unexecuted_blocks=1 00:22:06.369 00:22:06.369 ' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:06.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.369 --rc 
genhtml_branch_coverage=1 00:22:06.369 --rc genhtml_function_coverage=1 00:22:06.369 --rc genhtml_legend=1 00:22:06.369 --rc geninfo_all_blocks=1 00:22:06.369 --rc geninfo_unexecuted_blocks=1 00:22:06.369 00:22:06.369 ' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:06.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.369 --rc genhtml_branch_coverage=1 00:22:06.369 --rc genhtml_function_coverage=1 00:22:06.369 --rc genhtml_legend=1 00:22:06.369 --rc geninfo_all_blocks=1 00:22:06.369 --rc geninfo_unexecuted_blocks=1 00:22:06.369 00:22:06.369 ' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:06.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.369 --rc genhtml_branch_coverage=1 00:22:06.369 --rc genhtml_function_coverage=1 00:22:06.369 --rc genhtml_legend=1 00:22:06.369 --rc geninfo_all_blocks=1 00:22:06.369 --rc geninfo_unexecuted_blocks=1 00:22:06.369 00:22:06.369 ' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.369 16:50:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.369 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.370 16:50:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:08.272 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:08.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.272 16:50:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:08.272 Found net devices under 0000:09:00.0: cvl_0_0 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.272 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:08.273 Found net devices under 0000:09:00.1: cvl_0_1 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.273 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.532 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.532 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.532 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.532 16:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:08.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:08.532 00:22:08.532 --- 10.0.0.2 ping statistics --- 00:22:08.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.532 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:22:08.532 00:22:08.532 --- 10.0.0.1 ping statistics --- 00:22:08.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.532 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2407648 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2407648 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2407648 ']' 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.532 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.532 [2024-10-17 16:50:22.117051] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:08.532 [2024-10-17 16:50:22.117127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.532 [2024-10-17 16:50:22.184500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.790 [2024-10-17 16:50:22.248147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:08.790 [2024-10-17 16:50:22.248216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.790 [2024-10-17 16:50:22.248232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.790 [2024-10-17 16:50:22.248246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.790 [2024-10-17 16:50:22.248257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.790 [2024-10-17 16:50:22.249889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.790 [2024-10-17 16:50:22.249919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.790 [2024-10-17 16:50:22.250042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.790 [2024-10-17 16:50:22.250046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.790 [2024-10-17 16:50:22.395996] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.790 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 Malloc0 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 [2024-10-17 16:50:22.466576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.791 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 [ 00:22:08.791 { 00:22:08.791 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.791 "subtype": "Discovery", 00:22:08.791 "listen_addresses": [], 00:22:08.791 "allow_any_host": true, 00:22:08.791 "hosts": [] 00:22:08.791 }, 00:22:08.791 { 00:22:08.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.791 "subtype": "NVMe", 00:22:08.791 "listen_addresses": [ 00:22:08.791 { 00:22:08.791 "trtype": "TCP", 00:22:08.791 "adrfam": "IPv4", 00:22:08.791 "traddr": "10.0.0.2", 00:22:08.791 "trsvcid": "4420" 00:22:08.791 } 00:22:08.791 ], 00:22:08.791 "allow_any_host": true, 00:22:08.791 "hosts": [], 00:22:08.791 "serial_number": "SPDK00000000000001", 00:22:08.791 "model_number": "SPDK bdev Controller", 00:22:08.791 "max_namespaces": 2, 00:22:08.791 "min_cntlid": 1, 00:22:08.791 "max_cntlid": 65519, 00:22:08.791 "namespaces": [ 00:22:08.791 { 00:22:08.791 "nsid": 1, 00:22:08.791 "bdev_name": "Malloc0", 00:22:08.791 "name": "Malloc0", 00:22:09.049 "nguid": "38C9412B35A24682A7994B7A4F22D33C", 00:22:09.049 "uuid": "38c9412b-35a2-4682-a799-4b7a4f22d33c" 00:22:09.049 } 00:22:09.049 ] 00:22:09.049 } 00:22:09.049 ] 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2407676 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:22:09.049 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 Malloc1 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 [ 00:22:09.308 { 00:22:09.308 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:09.308 "subtype": "Discovery", 00:22:09.308 "listen_addresses": [], 00:22:09.308 "allow_any_host": true, 00:22:09.308 "hosts": [] 00:22:09.308 }, 00:22:09.308 { 00:22:09.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.308 "subtype": "NVMe", 00:22:09.308 "listen_addresses": [ 00:22:09.308 { 00:22:09.308 "trtype": "TCP", 00:22:09.308 "adrfam": "IPv4", 00:22:09.308 "traddr": "10.0.0.2", 00:22:09.308 "trsvcid": "4420" 00:22:09.308 } 00:22:09.308 ], 00:22:09.308 "allow_any_host": true, 00:22:09.308 "hosts": [], 00:22:09.308 "serial_number": "SPDK00000000000001", 00:22:09.308 "model_number": 
"SPDK bdev Controller", 00:22:09.308 "max_namespaces": 2, 00:22:09.308 "min_cntlid": 1, 00:22:09.308 "max_cntlid": 65519, 00:22:09.308 "namespaces": [ 00:22:09.308 { 00:22:09.308 "nsid": 1, 00:22:09.308 "bdev_name": "Malloc0", 00:22:09.308 "name": "Malloc0", 00:22:09.308 "nguid": "38C9412B35A24682A7994B7A4F22D33C", 00:22:09.308 "uuid": "38c9412b-35a2-4682-a799-4b7a4f22d33c" 00:22:09.308 }, 00:22:09.308 { 00:22:09.308 "nsid": 2, 00:22:09.308 "bdev_name": "Malloc1", 00:22:09.308 "name": "Malloc1", 00:22:09.308 "nguid": "8741C68E66154BBA8890EF816D62F85A", 00:22:09.308 "uuid": "8741c68e-6615-4bba-8890-ef816d62f85a" 00:22:09.308 } 00:22:09.308 ] 00:22:09.308 } 00:22:09.308 ] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2407676 00:22:09.308 Asynchronous Event Request test 00:22:09.308 Attaching to 10.0.0.2 00:22:09.308 Attached to 10.0.0.2 00:22:09.308 Registering asynchronous event callbacks... 00:22:09.308 Starting namespace attribute notice tests for all controllers... 00:22:09.308 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:09.308 aer_cb - Changed Namespace 00:22:09.308 Cleaning up... 
00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.308 rmmod nvme_tcp 
00:22:09.308 rmmod nvme_fabrics 00:22:09.308 rmmod nvme_keyring 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2407648 ']' 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 2407648 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2407648 ']' 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2407648 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.308 16:50:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2407648 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2407648' 00:22:09.568 killing process with pid 2407648 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2407648 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2407648 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:09.568 16:50:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.568 16:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.104 00:22:12.104 real 0m5.475s 00:22:12.104 user 0m4.616s 00:22:12.104 sys 0m1.905s 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.104 ************************************ 00:22:12.104 END TEST nvmf_aer 00:22:12.104 ************************************ 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.104 ************************************ 00:22:12.104 START TEST nvmf_async_init 
00:22:12.104 ************************************ 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:12.104 * Looking for test storage... 00:22:12.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:12.104 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:12.104 --rc genhtml_branch_coverage=1 00:22:12.104 --rc genhtml_function_coverage=1 00:22:12.104 --rc genhtml_legend=1 00:22:12.104 --rc geninfo_all_blocks=1 00:22:12.104 --rc geninfo_unexecuted_blocks=1 00:22:12.104 00:22:12.104 ' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:12.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.104 --rc genhtml_branch_coverage=1 00:22:12.104 --rc genhtml_function_coverage=1 00:22:12.104 --rc genhtml_legend=1 00:22:12.104 --rc geninfo_all_blocks=1 00:22:12.104 --rc geninfo_unexecuted_blocks=1 00:22:12.104 00:22:12.104 ' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:12.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.104 --rc genhtml_branch_coverage=1 00:22:12.104 --rc genhtml_function_coverage=1 00:22:12.104 --rc genhtml_legend=1 00:22:12.104 --rc geninfo_all_blocks=1 00:22:12.104 --rc geninfo_unexecuted_blocks=1 00:22:12.104 00:22:12.104 ' 00:22:12.104 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:12.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.104 --rc genhtml_branch_coverage=1 00:22:12.104 --rc genhtml_function_coverage=1 00:22:12.104 --rc genhtml_legend=1 00:22:12.104 --rc geninfo_all_blocks=1 00:22:12.104 --rc geninfo_unexecuted_blocks=1 00:22:12.104 00:22:12.105 ' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.105 16:50:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.105 
16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e316f82e955548e493ae6f5576eb592b 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.105 16:50:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.013 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.014 16:50:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:14.014 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:14.014 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:14.014 Found net devices under 0000:09:00.0: cvl_0_0 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:14.014 Found net devices under 0000:09:00.1: cvl_0_1 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.014 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:14.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:22:14.273 00:22:14.273 --- 10.0.0.2 ping statistics --- 00:22:14.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.273 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:22:14.273 00:22:14.273 --- 10.0.0.1 ping statistics --- 00:22:14.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.273 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2409740 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2409740 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2409740 ']' 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.273 16:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.273 [2024-10-17 16:50:27.787443] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
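Editor's note: the target above is launched as `nvmf_tgt -i 0 -e 0xFFFF -m 0x1`, where `-m 0x1` is a hexadecimal core mask selecting reactor cores. As a minimal sketch (the loop below is an illustrative helper, not SPDK code), the mask maps to core IDs like this:

```shell
# Hypothetical helper: expand an SPDK core mask (e.g. -m 0x1) into core IDs.
# Bit i of the mask set => core i is used; 0x1 therefore selects core 0 only,
# which matches the "Reactor started on core 0" notice later in this log.
mask=0x1
val=$((mask))            # shell arithmetic accepts the 0x prefix
cores=""
i=0
while [ "$val" -gt 0 ]; do
    if [ $((val & 1)) -eq 1 ]; then
        cores="$cores$i "
    fi
    val=$((val >> 1))
    i=$((i + 1))
done
echo "cores: $cores"
```

With `mask=0x5` the same loop would print cores 0 and 2, which is why `-e 0xFFFF` (the tracepoint group mask seen in the same command line) uses the identical bit-mask convention.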
00:22:14.273 [2024-10-17 16:50:27.787516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.273 [2024-10-17 16:50:27.848846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.273 [2024-10-17 16:50:27.906264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.273 [2024-10-17 16:50:27.906333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.273 [2024-10-17 16:50:27.906360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.273 [2024-10-17 16:50:27.906371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.273 [2024-10-17 16:50:27.906380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
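Editor's note: the `nguid` value passed to `nvmf_subsystem_add_ns ... -g e316f82e955548e493ae6f5576eb592b` further down is produced earlier in this run by `host/async_init.sh` via `uuidgen | tr -d -`, i.e. a freshly generated UUID with its dashes stripped. Reproduced here with the UUID observed in this log in place of `uuidgen` output:

```shell
# async_init.sh builds the 32-hex-digit NGUID by deleting dashes from a UUID.
# The dashed form reappears later in bdev_get_bdevs as the bdev "uuid"/alias.
uuid="e316f82e-9555-48e4-93ae-6f5576eb592b"
nguid=$(echo "$uuid" | tr -d -)
echo "$nguid"    # e316f82e955548e493ae6f5576eb592b
```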
00:22:14.273 [2024-10-17 16:50:27.906957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 [2024-10-17 16:50:28.045102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 null0 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e316f82e955548e493ae6f5576eb592b 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.532 [2024-10-17 16:50:28.085324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.532 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.790 nvme0n1 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.790 [ 00:22:14.790 { 00:22:14.790 "name": "nvme0n1", 00:22:14.790 "aliases": [ 00:22:14.790 "e316f82e-9555-48e4-93ae-6f5576eb592b" 00:22:14.790 ], 00:22:14.790 "product_name": "NVMe disk", 00:22:14.790 "block_size": 512, 00:22:14.790 "num_blocks": 2097152, 00:22:14.790 "uuid": "e316f82e-9555-48e4-93ae-6f5576eb592b", 00:22:14.790 "numa_id": 0, 00:22:14.790 "assigned_rate_limits": { 00:22:14.790 "rw_ios_per_sec": 0, 00:22:14.790 "rw_mbytes_per_sec": 0, 00:22:14.790 "r_mbytes_per_sec": 0, 00:22:14.790 "w_mbytes_per_sec": 0 00:22:14.790 }, 00:22:14.790 "claimed": false, 00:22:14.790 "zoned": false, 00:22:14.790 "supported_io_types": { 00:22:14.790 "read": true, 00:22:14.790 "write": true, 00:22:14.790 "unmap": false, 00:22:14.790 "flush": true, 00:22:14.790 "reset": true, 00:22:14.790 "nvme_admin": true, 00:22:14.790 "nvme_io": true, 00:22:14.790 "nvme_io_md": false, 00:22:14.790 "write_zeroes": true, 00:22:14.790 "zcopy": false, 00:22:14.790 "get_zone_info": false, 00:22:14.790 "zone_management": false, 00:22:14.790 "zone_append": false, 00:22:14.790 "compare": true, 00:22:14.790 "compare_and_write": true, 00:22:14.790 "abort": true, 00:22:14.790 "seek_hole": false, 00:22:14.790 "seek_data": false, 00:22:14.790 "copy": true, 00:22:14.790 
"nvme_iov_md": false 00:22:14.790 }, 00:22:14.790 "memory_domains": [ 00:22:14.790 { 00:22:14.790 "dma_device_id": "system", 00:22:14.790 "dma_device_type": 1 00:22:14.790 } 00:22:14.790 ], 00:22:14.790 "driver_specific": { 00:22:14.790 "nvme": [ 00:22:14.790 { 00:22:14.790 "trid": { 00:22:14.790 "trtype": "TCP", 00:22:14.790 "adrfam": "IPv4", 00:22:14.790 "traddr": "10.0.0.2", 00:22:14.790 "trsvcid": "4420", 00:22:14.790 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.790 }, 00:22:14.790 "ctrlr_data": { 00:22:14.790 "cntlid": 1, 00:22:14.790 "vendor_id": "0x8086", 00:22:14.790 "model_number": "SPDK bdev Controller", 00:22:14.790 "serial_number": "00000000000000000000", 00:22:14.790 "firmware_revision": "25.01", 00:22:14.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.790 "oacs": { 00:22:14.790 "security": 0, 00:22:14.790 "format": 0, 00:22:14.790 "firmware": 0, 00:22:14.790 "ns_manage": 0 00:22:14.790 }, 00:22:14.790 "multi_ctrlr": true, 00:22:14.790 "ana_reporting": false 00:22:14.790 }, 00:22:14.790 "vs": { 00:22:14.790 "nvme_version": "1.3" 00:22:14.790 }, 00:22:14.790 "ns_data": { 00:22:14.790 "id": 1, 00:22:14.790 "can_share": true 00:22:14.790 } 00:22:14.790 } 00:22:14.790 ], 00:22:14.790 "mp_policy": "active_passive" 00:22:14.790 } 00:22:14.790 } 00:22:14.790 ] 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.790 [2024-10-17 16:50:28.338541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.790 [2024-10-17 16:50:28.338636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xbd0be0 (9): Bad file descriptor 00:22:14.790 [2024-10-17 16:50:28.471180] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:14.790 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.791 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.791 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.791 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 [ 00:22:15.049 { 00:22:15.049 "name": "nvme0n1", 00:22:15.049 "aliases": [ 00:22:15.049 "e316f82e-9555-48e4-93ae-6f5576eb592b" 00:22:15.049 ], 00:22:15.049 "product_name": "NVMe disk", 00:22:15.049 "block_size": 512, 00:22:15.049 "num_blocks": 2097152, 00:22:15.049 "uuid": "e316f82e-9555-48e4-93ae-6f5576eb592b", 00:22:15.049 "numa_id": 0, 00:22:15.049 "assigned_rate_limits": { 00:22:15.049 "rw_ios_per_sec": 0, 00:22:15.049 "rw_mbytes_per_sec": 0, 00:22:15.049 "r_mbytes_per_sec": 0, 00:22:15.049 "w_mbytes_per_sec": 0 00:22:15.049 }, 00:22:15.049 "claimed": false, 00:22:15.049 "zoned": false, 00:22:15.049 "supported_io_types": { 00:22:15.049 "read": true, 00:22:15.049 "write": true, 00:22:15.049 "unmap": false, 00:22:15.049 "flush": true, 00:22:15.049 "reset": true, 00:22:15.049 "nvme_admin": true, 00:22:15.049 "nvme_io": true, 00:22:15.049 "nvme_io_md": false, 00:22:15.049 "write_zeroes": true, 00:22:15.049 "zcopy": false, 00:22:15.049 "get_zone_info": false, 00:22:15.049 "zone_management": false, 00:22:15.049 "zone_append": false, 00:22:15.049 "compare": true, 00:22:15.049 "compare_and_write": true, 00:22:15.049 "abort": true, 00:22:15.049 "seek_hole": false, 00:22:15.049 "seek_data": false, 00:22:15.049 "copy": true, 00:22:15.049 "nvme_iov_md": false 00:22:15.049 }, 00:22:15.049 "memory_domains": [ 00:22:15.049 { 00:22:15.049 
"dma_device_id": "system", 00:22:15.049 "dma_device_type": 1 00:22:15.049 } 00:22:15.049 ], 00:22:15.049 "driver_specific": { 00:22:15.049 "nvme": [ 00:22:15.049 { 00:22:15.049 "trid": { 00:22:15.049 "trtype": "TCP", 00:22:15.049 "adrfam": "IPv4", 00:22:15.049 "traddr": "10.0.0.2", 00:22:15.049 "trsvcid": "4420", 00:22:15.049 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:15.049 }, 00:22:15.049 "ctrlr_data": { 00:22:15.049 "cntlid": 2, 00:22:15.049 "vendor_id": "0x8086", 00:22:15.049 "model_number": "SPDK bdev Controller", 00:22:15.049 "serial_number": "00000000000000000000", 00:22:15.049 "firmware_revision": "25.01", 00:22:15.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.049 "oacs": { 00:22:15.049 "security": 0, 00:22:15.049 "format": 0, 00:22:15.049 "firmware": 0, 00:22:15.049 "ns_manage": 0 00:22:15.049 }, 00:22:15.049 "multi_ctrlr": true, 00:22:15.049 "ana_reporting": false 00:22:15.049 }, 00:22:15.049 "vs": { 00:22:15.049 "nvme_version": "1.3" 00:22:15.049 }, 00:22:15.049 "ns_data": { 00:22:15.049 "id": 1, 00:22:15.049 "can_share": true 00:22:15.049 } 00:22:15.049 } 00:22:15.049 ], 00:22:15.049 "mp_policy": "active_passive" 00:22:15.049 } 00:22:15.049 } 00:22:15.049 ] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XyioOWZvn5 00:22:15.049 16:50:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XyioOWZvn5 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.XyioOWZvn5 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 [2024-10-17 16:50:28.531313] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.049 [2024-10-17 16:50:28.531476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 [2024-10-17 16:50:28.547368] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.049 nvme0n1 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.049 [ 00:22:15.049 { 00:22:15.049 "name": "nvme0n1", 00:22:15.049 "aliases": [ 00:22:15.049 "e316f82e-9555-48e4-93ae-6f5576eb592b" 00:22:15.049 ], 00:22:15.049 "product_name": "NVMe disk", 00:22:15.049 "block_size": 512, 00:22:15.049 "num_blocks": 2097152, 00:22:15.049 "uuid": "e316f82e-9555-48e4-93ae-6f5576eb592b", 00:22:15.049 "numa_id": 0, 00:22:15.049 "assigned_rate_limits": { 00:22:15.049 "rw_ios_per_sec": 0, 00:22:15.049 "rw_mbytes_per_sec": 0, 
00:22:15.049 "r_mbytes_per_sec": 0, 00:22:15.049 "w_mbytes_per_sec": 0 00:22:15.049 }, 00:22:15.049 "claimed": false, 00:22:15.049 "zoned": false, 00:22:15.049 "supported_io_types": { 00:22:15.049 "read": true, 00:22:15.049 "write": true, 00:22:15.049 "unmap": false, 00:22:15.049 "flush": true, 00:22:15.049 "reset": true, 00:22:15.049 "nvme_admin": true, 00:22:15.049 "nvme_io": true, 00:22:15.049 "nvme_io_md": false, 00:22:15.049 "write_zeroes": true, 00:22:15.049 "zcopy": false, 00:22:15.049 "get_zone_info": false, 00:22:15.049 "zone_management": false, 00:22:15.049 "zone_append": false, 00:22:15.049 "compare": true, 00:22:15.049 "compare_and_write": true, 00:22:15.049 "abort": true, 00:22:15.049 "seek_hole": false, 00:22:15.049 "seek_data": false, 00:22:15.049 "copy": true, 00:22:15.049 "nvme_iov_md": false 00:22:15.049 }, 00:22:15.049 "memory_domains": [ 00:22:15.049 { 00:22:15.049 "dma_device_id": "system", 00:22:15.049 "dma_device_type": 1 00:22:15.049 } 00:22:15.049 ], 00:22:15.049 "driver_specific": { 00:22:15.049 "nvme": [ 00:22:15.049 { 00:22:15.049 "trid": { 00:22:15.049 "trtype": "TCP", 00:22:15.049 "adrfam": "IPv4", 00:22:15.049 "traddr": "10.0.0.2", 00:22:15.049 "trsvcid": "4421", 00:22:15.049 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:15.049 }, 00:22:15.049 "ctrlr_data": { 00:22:15.049 "cntlid": 3, 00:22:15.049 "vendor_id": "0x8086", 00:22:15.049 "model_number": "SPDK bdev Controller", 00:22:15.049 "serial_number": "00000000000000000000", 00:22:15.049 "firmware_revision": "25.01", 00:22:15.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.049 "oacs": { 00:22:15.049 "security": 0, 00:22:15.049 "format": 0, 00:22:15.049 "firmware": 0, 00:22:15.049 "ns_manage": 0 00:22:15.049 }, 00:22:15.049 "multi_ctrlr": true, 00:22:15.049 "ana_reporting": false 00:22:15.049 }, 00:22:15.049 "vs": { 00:22:15.049 "nvme_version": "1.3" 00:22:15.049 }, 00:22:15.049 "ns_data": { 00:22:15.049 "id": 1, 00:22:15.049 "can_share": true 00:22:15.049 } 00:22:15.049 } 
00:22:15.049 ], 00:22:15.049 "mp_policy": "active_passive" 00:22:15.049 } 00:22:15.049 } 00:22:15.049 ] 00:22:15.049 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.XyioOWZvn5 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:15.050 rmmod nvme_tcp 00:22:15.050 rmmod nvme_fabrics 00:22:15.050 rmmod nvme_keyring 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:15.050 16:50:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2409740 ']' 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2409740 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2409740 ']' 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2409740 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.050 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2409740 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2409740' 00:22:15.309 killing process with pid 2409740 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2409740 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2409740 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:15.309 
16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.309 16:50:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.846 00:22:17.846 real 0m5.701s 00:22:17.846 user 0m2.213s 00:22:17.846 sys 0m1.922s 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.846 ************************************ 00:22:17.846 END TEST nvmf_async_init 00:22:17.846 ************************************ 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.846 ************************************ 00:22:17.846 START TEST dma 00:22:17.846 ************************************ 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:17.846 * Looking for test storage... 00:22:17.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.846 --rc genhtml_branch_coverage=1 00:22:17.846 --rc genhtml_function_coverage=1 00:22:17.846 --rc genhtml_legend=1 00:22:17.846 --rc geninfo_all_blocks=1 00:22:17.846 --rc geninfo_unexecuted_blocks=1 00:22:17.846 00:22:17.846 ' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.846 --rc genhtml_branch_coverage=1 00:22:17.846 --rc genhtml_function_coverage=1 
00:22:17.846 --rc genhtml_legend=1 00:22:17.846 --rc geninfo_all_blocks=1 00:22:17.846 --rc geninfo_unexecuted_blocks=1 00:22:17.846 00:22:17.846 ' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.846 --rc genhtml_branch_coverage=1 00:22:17.846 --rc genhtml_function_coverage=1 00:22:17.846 --rc genhtml_legend=1 00:22:17.846 --rc geninfo_all_blocks=1 00:22:17.846 --rc geninfo_unexecuted_blocks=1 00:22:17.846 00:22:17.846 ' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:17.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.846 --rc genhtml_branch_coverage=1 00:22:17.846 --rc genhtml_function_coverage=1 00:22:17.846 --rc genhtml_legend=1 00:22:17.846 --rc geninfo_all_blocks=1 00:22:17.846 --rc geninfo_unexecuted_blocks=1 00:22:17.846 00:22:17.846 ' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:17.846 
16:50:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.846 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:17.847 00:22:17.847 real 0m0.166s 00:22:17.847 user 0m0.113s 00:22:17.847 sys 0m0.062s 00:22:17.847 16:50:31 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:17.847 ************************************ 00:22:17.847 END TEST dma 00:22:17.847 ************************************ 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.847 ************************************ 00:22:17.847 START TEST nvmf_identify 00:22:17.847 ************************************ 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:17.847 * Looking for test storage... 
00:22:17.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.847 --rc genhtml_branch_coverage=1 00:22:17.847 --rc genhtml_function_coverage=1 00:22:17.847 --rc genhtml_legend=1 00:22:17.847 --rc geninfo_all_blocks=1 00:22:17.847 --rc geninfo_unexecuted_blocks=1 00:22:17.847 00:22:17.847 ' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
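The trace above steps through scripts/common.sh's `lt 1.15 2` / `cmp_versions`, splitting each version on `.-:` and comparing component by component. A minimal re-creation of that comparison is sketched below; the function name `lt` mirrors the trace, but this body is an illustration, not the SPDK original.

```shell
# Sketch of a component-wise "less than" version compare, as exercised by
# the `lt 1.15 2` call in the trace. Assumption: purely numeric components.
lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${ver1[i]:-0}  # missing components compare as 0
        b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"    # numeric compare: 1 < 2 at the first component
```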
# LCOV_OPTS=' 00:22:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.847 --rc genhtml_branch_coverage=1 00:22:17.847 --rc genhtml_function_coverage=1 00:22:17.847 --rc genhtml_legend=1 00:22:17.847 --rc geninfo_all_blocks=1 00:22:17.847 --rc geninfo_unexecuted_blocks=1 00:22:17.847 00:22:17.847 ' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.847 --rc genhtml_branch_coverage=1 00:22:17.847 --rc genhtml_function_coverage=1 00:22:17.847 --rc genhtml_legend=1 00:22:17.847 --rc geninfo_all_blocks=1 00:22:17.847 --rc geninfo_unexecuted_blocks=1 00:22:17.847 00:22:17.847 ' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:17.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.847 --rc genhtml_branch_coverage=1 00:22:17.847 --rc genhtml_function_coverage=1 00:22:17.847 --rc genhtml_legend=1 00:22:17.847 --rc geninfo_all_blocks=1 00:22:17.847 --rc geninfo_unexecuted_blocks=1 00:22:17.847 00:22:17.847 ' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.847 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
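The `[: : integer expression expected` message captured above comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` requires integer operands, so an empty variable makes the test itself error out (the branch is skipped and the script keeps going). A minimal reproduction and one common guard, as an illustration rather than the SPDK fix:

```shell
# Reproduce the failure mode seen in nvmf/common.sh line 33, then guard it.
val=""

# What the trace shows: an empty string fed to -eq is a test(1) error,
# not "false" -- stderr is silenced here so the sketch runs cleanly.
if [ "$val" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi

# Guarded variant: default the empty variable to 0 before comparing.
if [ "${val:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```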
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.848 16:50:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.824 16:50:33 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.824 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:19.825 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.825 
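The `Found 0000:09:00.0 (0x8086 - 0x159b)` lines result from the e810/x722/mlx arrays built just above, which bucket NICs by PCI vendor:device ID. The bucketing can be sketched as a simple case statement; the IDs are taken from the trace, but the helper name `classify_nic` is hypothetical:

```shell
# Map a PCI vendor/device pair to the NIC family buckets used in the trace
# (0x8086 = Intel, 0x15b3 = Mellanox). Illustrative only.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;     # Intel X722
        0x15b3:*)                    echo mlx ;;      # Mellanox ConnectX parts
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the device found at 0000:09:00.0 above
```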
16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:19.825 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:19.825 Found net devices under 0000:09:00.0: cvl_0_0 00:22:19.825 16:50:33 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:19.825 Found net devices under 0000:09:00.1: cvl_0_1 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.825 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:20.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:22:20.084 00:22:20.084 --- 10.0.0.2 ping statistics --- 00:22:20.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.084 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:22:20.084 00:22:20.084 --- 10.0.0.1 ping statistics --- 00:22:20.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.084 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2411887 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2411887 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2411887 ']' 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.084 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.084 [2024-10-17 16:50:33.715071] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:20.084 [2024-10-17 16:50:33.715170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.343 [2024-10-17 16:50:33.781419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.343 [2024-10-17 16:50:33.843199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.343 [2024-10-17 16:50:33.843257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.343 [2024-10-17 16:50:33.843271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.343 [2024-10-17 16:50:33.843281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.343 [2024-10-17 16:50:33.843291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:20.343 [2024-10-17 16:50:33.845022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.343 [2024-10-17 16:50:33.845078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.343 [2024-10-17 16:50:33.845123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.343 [2024-10-17 16:50:33.845126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.343 [2024-10-17 16:50:33.971889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.343 16:50:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.343 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:20.343 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.343 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.603 Malloc0 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.603 16:50:34 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.603 [2024-10-17 16:50:34.062541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.603 16:50:34 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.603 [ 00:22:20.603 { 00:22:20.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:20.603 "subtype": "Discovery", 00:22:20.603 "listen_addresses": [ 00:22:20.603 { 00:22:20.603 "trtype": "TCP", 00:22:20.603 "adrfam": "IPv4", 00:22:20.603 "traddr": "10.0.0.2", 00:22:20.603 "trsvcid": "4420" 00:22:20.603 } 00:22:20.603 ], 00:22:20.603 "allow_any_host": true, 00:22:20.603 "hosts": [] 00:22:20.603 }, 00:22:20.603 { 00:22:20.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.603 "subtype": "NVMe", 00:22:20.603 "listen_addresses": [ 00:22:20.603 { 00:22:20.603 "trtype": "TCP", 00:22:20.603 "adrfam": "IPv4", 00:22:20.603 "traddr": "10.0.0.2", 00:22:20.603 "trsvcid": "4420" 00:22:20.603 } 00:22:20.603 ], 00:22:20.603 "allow_any_host": true, 00:22:20.603 "hosts": [], 00:22:20.603 "serial_number": "SPDK00000000000001", 00:22:20.603 "model_number": "SPDK bdev Controller", 00:22:20.603 "max_namespaces": 32, 00:22:20.603 "min_cntlid": 1, 00:22:20.603 "max_cntlid": 65519, 00:22:20.603 "namespaces": [ 00:22:20.603 { 00:22:20.603 "nsid": 1, 00:22:20.603 "bdev_name": "Malloc0", 00:22:20.603 "name": "Malloc0", 00:22:20.603 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:20.603 "eui64": "ABCDEF0123456789", 00:22:20.603 "uuid": "0b74ab0b-4ba9-4588-be85-52999f175da2" 00:22:20.603 } 00:22:20.603 ] 00:22:20.603 } 00:22:20.603 ] 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.603 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:20.603 [2024-10-17 16:50:34.104923] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:20.603 [2024-10-17 16:50:34.104970] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411918 ] 00:22:20.603 [2024-10-17 16:50:34.136686] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:20.603 [2024-10-17 16:50:34.136750] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:20.603 [2024-10-17 16:50:34.136761] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:20.603 [2024-10-17 16:50:34.136778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:20.603 [2024-10-17 16:50:34.136792] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:20.603 [2024-10-17 16:50:34.140420] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:20.603 [2024-10-17 16:50:34.140487] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e35760 0 00:22:20.603 [2024-10-17 16:50:34.151017] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:20.603 [2024-10-17 16:50:34.151051] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:20.603 [2024-10-17 16:50:34.151061] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:20.603 [2024-10-17 16:50:34.151067] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:20.603 [2024-10-17 16:50:34.151123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.603 [2024-10-17 16:50:34.151138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.603 [2024-10-17 16:50:34.151145] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.151163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:20.604 [2024-10-17 16:50:34.151190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.159032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.159050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.159058] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.159102] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:20.604 [2024-10-17 16:50:34.159115] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:20.604 [2024-10-17 16:50:34.159125] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:20.604 [2024-10-17 16:50:34.159148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 
00:22:20.604 [2024-10-17 16:50:34.159174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.159199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.159326] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.159339] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.159346] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.159362] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:20.604 [2024-10-17 16:50:34.159375] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:20.604 [2024-10-17 16:50:34.159387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159394] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.159411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.159432] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.159521] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.159535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:20.604 [2024-10-17 16:50:34.159542] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159549] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.159558] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:20.604 [2024-10-17 16:50:34.159573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:20.604 [2024-10-17 16:50:34.159585] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159593] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159599] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.159609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.159630] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.159711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.159729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.159737] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.159753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:20.604 [2024-10-17 16:50:34.159769] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159778] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159784] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.159795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.159815] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.159890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.159901] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.159908] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.159915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.159924] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:20.604 [2024-10-17 16:50:34.159933] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:20.604 [2024-10-17 16:50:34.159945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:20.604 [2024-10-17 16:50:34.160056] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:20.604 [2024-10-17 16:50:34.160067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:22:20.604 [2024-10-17 16:50:34.160084] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.160107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.160129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.160241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.160255] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.160262] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.160277] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:20.604 [2024-10-17 16:50:34.160294] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160303] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160309] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.160319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.160340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 
16:50:34.160411] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.160423] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.160430] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160436] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.160444] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:20.604 [2024-10-17 16:50:34.160452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:20.604 [2024-10-17 16:50:34.160465] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:20.604 [2024-10-17 16:50:34.160479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:20.604 [2024-10-17 16:50:34.160498] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.604 [2024-10-17 16:50:34.160516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.604 [2024-10-17 16:50:34.160537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.604 [2024-10-17 16:50:34.160660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.604 [2024-10-17 16:50:34.160674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:20.604 [2024-10-17 16:50:34.160681] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160688] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e35760): datao=0, datal=4096, cccid=0 00:22:20.604 [2024-10-17 16:50:34.160696] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e95480) on tqpair(0x1e35760): expected_datao=0, payload_size=4096 00:22:20.604 [2024-10-17 16:50:34.160704] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160722] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.160732] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.201079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.604 [2024-10-17 16:50:34.201098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.604 [2024-10-17 16:50:34.201106] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.201112] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.604 [2024-10-17 16:50:34.201126] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:20.604 [2024-10-17 16:50:34.201136] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:20.604 [2024-10-17 16:50:34.201143] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:20.604 [2024-10-17 16:50:34.201152] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:20.604 [2024-10-17 16:50:34.201159] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:22:20.604 [2024-10-17 16:50:34.201167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:20.604 [2024-10-17 16:50:34.201188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:20.604 [2024-10-17 16:50:34.201208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.201218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.604 [2024-10-17 16:50:34.201225] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.605 [2024-10-17 16:50:34.201259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.605 [2024-10-17 16:50:34.201349] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.605 [2024-10-17 16:50:34.201363] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.605 [2024-10-17 16:50:34.201370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.605 [2024-10-17 16:50:34.201390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201397] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.605 [2024-10-17 16:50:34.201423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201430] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201436] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.605 [2024-10-17 16:50:34.201454] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201467] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.605 [2024-10-17 16:50:34.201485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.605 [2024-10-17 16:50:34.201514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:20.605 [2024-10-17 16:50:34.201534] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:22:20.605 [2024-10-17 16:50:34.201547] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201554] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.605 [2024-10-17 16:50:34.201587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95480, cid 0, qid 0 00:22:20.605 [2024-10-17 16:50:34.201598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95600, cid 1, qid 0 00:22:20.605 [2024-10-17 16:50:34.201605] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95780, cid 2, qid 0 00:22:20.605 [2024-10-17 16:50:34.201613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.605 [2024-10-17 16:50:34.201627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95a80, cid 4, qid 0 00:22:20.605 [2024-10-17 16:50:34.201754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.605 [2024-10-17 16:50:34.201766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.605 [2024-10-17 16:50:34.201773] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201779] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95a80) on tqpair=0x1e35760 00:22:20.605 [2024-10-17 16:50:34.201790] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:20.605 [2024-10-17 16:50:34.201799] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:20.605 [2024-10-17 16:50:34.201817] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201826] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.201837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.605 [2024-10-17 16:50:34.201858] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95a80, cid 4, qid 0 00:22:20.605 [2024-10-17 16:50:34.201960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.605 [2024-10-17 16:50:34.201975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.605 [2024-10-17 16:50:34.201981] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.201988] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e35760): datao=0, datal=4096, cccid=4 00:22:20.605 [2024-10-17 16:50:34.201995] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e95a80) on tqpair(0x1e35760): expected_datao=0, payload_size=4096 00:22:20.605 [2024-10-17 16:50:34.202015] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202027] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202034] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.605 [2024-10-17 16:50:34.202056] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.605 [2024-10-17 16:50:34.202063] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202069] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95a80) on tqpair=0x1e35760 00:22:20.605 [2024-10-17 16:50:34.202089] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:20.605 [2024-10-17 16:50:34.202131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.202152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.605 [2024-10-17 16:50:34.202164] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202178] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.202187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.605 [2024-10-17 16:50:34.202210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95a80, cid 4, qid 0 00:22:20.605 [2024-10-17 16:50:34.202221] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95c00, cid 5, qid 0 00:22:20.605 [2024-10-17 16:50:34.202349] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.605 [2024-10-17 16:50:34.202368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.605 [2024-10-17 16:50:34.202375] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202381] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e35760): datao=0, datal=1024, cccid=4 00:22:20.605 [2024-10-17 16:50:34.202389] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e95a80) on tqpair(0x1e35760): expected_datao=0, 
payload_size=1024 00:22:20.605 [2024-10-17 16:50:34.202396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202406] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202413] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.605 [2024-10-17 16:50:34.202430] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.605 [2024-10-17 16:50:34.202437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.202443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95c00) on tqpair=0x1e35760 00:22:20.605 [2024-10-17 16:50:34.243159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.605 [2024-10-17 16:50:34.243178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.605 [2024-10-17 16:50:34.243186] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.243193] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95a80) on tqpair=0x1e35760 00:22:20.605 [2024-10-17 16:50:34.243218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.243228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.243239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.605 [2024-10-17 16:50:34.243269] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95a80, cid 4, qid 0 00:22:20.605 [2024-10-17 16:50:34.243369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.605 [2024-10-17 16:50:34.243384] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.605 [2024-10-17 16:50:34.243391] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.243398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e35760): datao=0, datal=3072, cccid=4 00:22:20.605 [2024-10-17 16:50:34.243405] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e95a80) on tqpair(0x1e35760): expected_datao=0, payload_size=3072 00:22:20.605 [2024-10-17 16:50:34.243412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.243433] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.243442] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.287016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.605 [2024-10-17 16:50:34.287050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.605 [2024-10-17 16:50:34.287058] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.287065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95a80) on tqpair=0x1e35760 00:22:20.605 [2024-10-17 16:50:34.287082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.287090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e35760) 00:22:20.605 [2024-10-17 16:50:34.287102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.605 [2024-10-17 16:50:34.287131] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95a80, cid 4, qid 0 00:22:20.605 [2024-10-17 16:50:34.287234] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.605 [2024-10-17 
16:50:34.287246] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.605 [2024-10-17 16:50:34.287263] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.287270] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e35760): datao=0, datal=8, cccid=4 00:22:20.605 [2024-10-17 16:50:34.287278] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e95a80) on tqpair(0x1e35760): expected_datao=0, payload_size=8 00:22:20.605 [2024-10-17 16:50:34.287285] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.605 [2024-10-17 16:50:34.287295] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.606 [2024-10-17 16:50:34.287302] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.870 [2024-10-17 16:50:34.328096] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.870 [2024-10-17 16:50:34.328115] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.870 [2024-10-17 16:50:34.328122] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.870 [2024-10-17 16:50:34.328129] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95a80) on tqpair=0x1e35760 00:22:20.870 ===================================================== 00:22:20.870 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:20.870 ===================================================== 00:22:20.870 Controller Capabilities/Features 00:22:20.870 ================================ 00:22:20.870 Vendor ID: 0000 00:22:20.870 Subsystem Vendor ID: 0000 00:22:20.870 Serial Number: .................... 00:22:20.870 Model Number: ........................................ 
00:22:20.870 Firmware Version: 25.01 00:22:20.870 Recommended Arb Burst: 0 00:22:20.870 IEEE OUI Identifier: 00 00 00 00:22:20.870 Multi-path I/O 00:22:20.870 May have multiple subsystem ports: No 00:22:20.870 May have multiple controllers: No 00:22:20.870 Associated with SR-IOV VF: No 00:22:20.870 Max Data Transfer Size: 131072 00:22:20.870 Max Number of Namespaces: 0 00:22:20.870 Max Number of I/O Queues: 1024 00:22:20.870 NVMe Specification Version (VS): 1.3 00:22:20.870 NVMe Specification Version (Identify): 1.3 00:22:20.870 Maximum Queue Entries: 128 00:22:20.870 Contiguous Queues Required: Yes 00:22:20.870 Arbitration Mechanisms Supported 00:22:20.870 Weighted Round Robin: Not Supported 00:22:20.870 Vendor Specific: Not Supported 00:22:20.870 Reset Timeout: 15000 ms 00:22:20.870 Doorbell Stride: 4 bytes 00:22:20.870 NVM Subsystem Reset: Not Supported 00:22:20.870 Command Sets Supported 00:22:20.870 NVM Command Set: Supported 00:22:20.870 Boot Partition: Not Supported 00:22:20.870 Memory Page Size Minimum: 4096 bytes 00:22:20.870 Memory Page Size Maximum: 4096 bytes 00:22:20.870 Persistent Memory Region: Not Supported 00:22:20.870 Optional Asynchronous Events Supported 00:22:20.870 Namespace Attribute Notices: Not Supported 00:22:20.870 Firmware Activation Notices: Not Supported 00:22:20.870 ANA Change Notices: Not Supported 00:22:20.870 PLE Aggregate Log Change Notices: Not Supported 00:22:20.870 LBA Status Info Alert Notices: Not Supported 00:22:20.870 EGE Aggregate Log Change Notices: Not Supported 00:22:20.870 Normal NVM Subsystem Shutdown event: Not Supported 00:22:20.870 Zone Descriptor Change Notices: Not Supported 00:22:20.870 Discovery Log Change Notices: Supported 00:22:20.870 Controller Attributes 00:22:20.870 128-bit Host Identifier: Not Supported 00:22:20.870 Non-Operational Permissive Mode: Not Supported 00:22:20.870 NVM Sets: Not Supported 00:22:20.870 Read Recovery Levels: Not Supported 00:22:20.870 Endurance Groups: Not Supported 00:22:20.870 
Predictable Latency Mode: Not Supported 00:22:20.870 Traffic Based Keep ALive: Not Supported 00:22:20.870 Namespace Granularity: Not Supported 00:22:20.870 SQ Associations: Not Supported 00:22:20.870 UUID List: Not Supported 00:22:20.870 Multi-Domain Subsystem: Not Supported 00:22:20.870 Fixed Capacity Management: Not Supported 00:22:20.870 Variable Capacity Management: Not Supported 00:22:20.870 Delete Endurance Group: Not Supported 00:22:20.870 Delete NVM Set: Not Supported 00:22:20.870 Extended LBA Formats Supported: Not Supported 00:22:20.870 Flexible Data Placement Supported: Not Supported 00:22:20.870 00:22:20.870 Controller Memory Buffer Support 00:22:20.870 ================================ 00:22:20.870 Supported: No 00:22:20.870 00:22:20.870 Persistent Memory Region Support 00:22:20.870 ================================ 00:22:20.870 Supported: No 00:22:20.870 00:22:20.870 Admin Command Set Attributes 00:22:20.870 ============================ 00:22:20.870 Security Send/Receive: Not Supported 00:22:20.870 Format NVM: Not Supported 00:22:20.870 Firmware Activate/Download: Not Supported 00:22:20.870 Namespace Management: Not Supported 00:22:20.870 Device Self-Test: Not Supported 00:22:20.870 Directives: Not Supported 00:22:20.870 NVMe-MI: Not Supported 00:22:20.870 Virtualization Management: Not Supported 00:22:20.870 Doorbell Buffer Config: Not Supported 00:22:20.870 Get LBA Status Capability: Not Supported 00:22:20.870 Command & Feature Lockdown Capability: Not Supported 00:22:20.870 Abort Command Limit: 1 00:22:20.870 Async Event Request Limit: 4 00:22:20.870 Number of Firmware Slots: N/A 00:22:20.870 Firmware Slot 1 Read-Only: N/A 00:22:20.870 Firmware Activation Without Reset: N/A 00:22:20.870 Multiple Update Detection Support: N/A 00:22:20.870 Firmware Update Granularity: No Information Provided 00:22:20.870 Per-Namespace SMART Log: No 00:22:20.870 Asymmetric Namespace Access Log Page: Not Supported 00:22:20.870 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:20.870 Command Effects Log Page: Not Supported 00:22:20.870 Get Log Page Extended Data: Supported 00:22:20.870 Telemetry Log Pages: Not Supported 00:22:20.870 Persistent Event Log Pages: Not Supported 00:22:20.870 Supported Log Pages Log Page: May Support 00:22:20.870 Commands Supported & Effects Log Page: Not Supported 00:22:20.870 Feature Identifiers & Effects Log Page:May Support 00:22:20.870 NVMe-MI Commands & Effects Log Page: May Support 00:22:20.870 Data Area 4 for Telemetry Log: Not Supported 00:22:20.870 Error Log Page Entries Supported: 128 00:22:20.870 Keep Alive: Not Supported 00:22:20.870 00:22:20.870 NVM Command Set Attributes 00:22:20.870 ========================== 00:22:20.870 Submission Queue Entry Size 00:22:20.870 Max: 1 00:22:20.870 Min: 1 00:22:20.870 Completion Queue Entry Size 00:22:20.870 Max: 1 00:22:20.870 Min: 1 00:22:20.870 Number of Namespaces: 0 00:22:20.870 Compare Command: Not Supported 00:22:20.870 Write Uncorrectable Command: Not Supported 00:22:20.870 Dataset Management Command: Not Supported 00:22:20.870 Write Zeroes Command: Not Supported 00:22:20.870 Set Features Save Field: Not Supported 00:22:20.870 Reservations: Not Supported 00:22:20.870 Timestamp: Not Supported 00:22:20.870 Copy: Not Supported 00:22:20.870 Volatile Write Cache: Not Present 00:22:20.870 Atomic Write Unit (Normal): 1 00:22:20.870 Atomic Write Unit (PFail): 1 00:22:20.870 Atomic Compare & Write Unit: 1 00:22:20.870 Fused Compare & Write: Supported 00:22:20.870 Scatter-Gather List 00:22:20.870 SGL Command Set: Supported 00:22:20.870 SGL Keyed: Supported 00:22:20.870 SGL Bit Bucket Descriptor: Not Supported 00:22:20.870 SGL Metadata Pointer: Not Supported 00:22:20.870 Oversized SGL: Not Supported 00:22:20.870 SGL Metadata Address: Not Supported 00:22:20.870 SGL Offset: Supported 00:22:20.870 Transport SGL Data Block: Not Supported 00:22:20.870 Replay Protected Memory Block: Not Supported 00:22:20.870 00:22:20.870 
Firmware Slot Information 00:22:20.870 ========================= 00:22:20.870 Active slot: 0 00:22:20.870 00:22:20.870 00:22:20.870 Error Log 00:22:20.870 ========= 00:22:20.870 00:22:20.870 Active Namespaces 00:22:20.870 ================= 00:22:20.870 Discovery Log Page 00:22:20.870 ================== 00:22:20.870 Generation Counter: 2 00:22:20.870 Number of Records: 2 00:22:20.870 Record Format: 0 00:22:20.870 00:22:20.870 Discovery Log Entry 0 00:22:20.870 ---------------------- 00:22:20.870 Transport Type: 3 (TCP) 00:22:20.870 Address Family: 1 (IPv4) 00:22:20.870 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:20.870 Entry Flags: 00:22:20.870 Duplicate Returned Information: 1 00:22:20.870 Explicit Persistent Connection Support for Discovery: 1 00:22:20.870 Transport Requirements: 00:22:20.870 Secure Channel: Not Required 00:22:20.870 Port ID: 0 (0x0000) 00:22:20.870 Controller ID: 65535 (0xffff) 00:22:20.870 Admin Max SQ Size: 128 00:22:20.870 Transport Service Identifier: 4420 00:22:20.870 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:20.870 Transport Address: 10.0.0.2 00:22:20.870 Discovery Log Entry 1 00:22:20.870 ---------------------- 00:22:20.870 Transport Type: 3 (TCP) 00:22:20.870 Address Family: 1 (IPv4) 00:22:20.870 Subsystem Type: 2 (NVM Subsystem) 00:22:20.870 Entry Flags: 00:22:20.870 Duplicate Returned Information: 0 00:22:20.870 Explicit Persistent Connection Support for Discovery: 0 00:22:20.870 Transport Requirements: 00:22:20.870 Secure Channel: Not Required 00:22:20.870 Port ID: 0 (0x0000) 00:22:20.870 Controller ID: 65535 (0xffff) 00:22:20.870 Admin Max SQ Size: 128 00:22:20.870 Transport Service Identifier: 4420 00:22:20.870 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:20.870 Transport Address: 10.0.0.2 [2024-10-17 16:50:34.328247] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:20.870 [2024-10-17 16:50:34.328269] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95480) on tqpair=0x1e35760 00:22:20.870 [2024-10-17 16:50:34.328283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.871 [2024-10-17 16:50:34.328292] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95600) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.328299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.871 [2024-10-17 16:50:34.328307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95780) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.328314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.871 [2024-10-17 16:50:34.328322] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.328329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.871 [2024-10-17 16:50:34.328343] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328357] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.328367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.328393] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.328499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.328511] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.328518] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.328538] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328546] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.328562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.328589] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.328682] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.328696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.328709] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328717] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.328727] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:20.871 [2024-10-17 16:50:34.328735] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:20.871 [2024-10-17 16:50:34.328751] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 
16:50:34.328767] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.328777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.328797] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.328876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.328890] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.328896] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328903] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.328921] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328931] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.328937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.328947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.328967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.329053] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.329068] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.329075] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329082] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 
00:22:20.871 [2024-10-17 16:50:34.329098] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329108] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329114] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.329124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.329145] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.329227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.329241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.329248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329254] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.329271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329280] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329287] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.329297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.329317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.329394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.329408] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 
[2024-10-17 16:50:34.329415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329421] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.329438] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.329464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.329484] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.329561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.329575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.329582] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329588] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.329605] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329614] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.329631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.329651] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 
0 00:22:20.871 [2024-10-17 16:50:34.329729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.329742] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.329749] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.329772] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329781] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.329798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.329818] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.329896] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.329910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.329916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329923] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.329939] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.329955] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.329965] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.329985] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.330084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.330101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.330109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.330115] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.330132] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.330141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.330147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.330158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.871 [2024-10-17 16:50:34.330179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.871 [2024-10-17 16:50:34.330265] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.871 [2024-10-17 16:50:34.330279] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.871 [2024-10-17 16:50:34.330286] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.330292] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.871 [2024-10-17 16:50:34.330309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.330318] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.871 [2024-10-17 16:50:34.330324] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.871 [2024-10-17 16:50:34.330335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.872 [2024-10-17 16:50:34.330355] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.872 [2024-10-17 16:50:34.330427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.872 [2024-10-17 16:50:34.330441] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.872 [2024-10-17 16:50:34.330447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330454] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.872 [2024-10-17 16:50:34.330470] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330480] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330486] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.872 [2024-10-17 16:50:34.330496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.872 [2024-10-17 16:50:34.330516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.872 [2024-10-17 16:50:34.330591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.872 [2024-10-17 16:50:34.330604] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.872 [2024-10-17 16:50:34.330611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330617] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.872 [2024-10-17 16:50:34.330634] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330649] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.872 [2024-10-17 16:50:34.330660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.872 [2024-10-17 16:50:34.330680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.872 [2024-10-17 16:50:34.330753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.872 [2024-10-17 16:50:34.330765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.872 [2024-10-17 16:50:34.330775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330782] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.872 [2024-10-17 16:50:34.330799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330808] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330814] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.872 [2024-10-17 16:50:34.330824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.872 [2024-10-17 16:50:34.330844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.872 [2024-10-17 16:50:34.330920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.872 [2024-10-17 
16:50:34.330933] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.872 [2024-10-17 16:50:34.330940] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.872 [2024-10-17 16:50:34.330963] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.330979] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e35760) 00:22:20.872 [2024-10-17 16:50:34.330989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.872 [2024-10-17 16:50:34.335018] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e95900, cid 3, qid 0 00:22:20.872 [2024-10-17 16:50:34.335169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.872 [2024-10-17 16:50:34.335182] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.872 [2024-10-17 16:50:34.335189] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.335195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e95900) on tqpair=0x1e35760 00:22:20.872 [2024-10-17 16:50:34.335209] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:20.872 00:22:20.872 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:20.872 [2024-10-17 16:50:34.371123] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 
initialization... 00:22:20.872 [2024-10-17 16:50:34.371169] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412038 ] 00:22:20.872 [2024-10-17 16:50:34.405763] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:20.872 [2024-10-17 16:50:34.405812] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:20.872 [2024-10-17 16:50:34.405822] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:20.872 [2024-10-17 16:50:34.405838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:20.872 [2024-10-17 16:50:34.405850] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:20.872 [2024-10-17 16:50:34.406308] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:20.872 [2024-10-17 16:50:34.406349] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ce8760 0 00:22:20.872 [2024-10-17 16:50:34.412016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:20.872 [2024-10-17 16:50:34.412036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:20.872 [2024-10-17 16:50:34.412045] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:20.872 [2024-10-17 16:50:34.412051] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:20.872 [2024-10-17 16:50:34.412084] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.412097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.412103] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760) 00:22:20.872 [2024-10-17 16:50:34.412117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:20.872 [2024-10-17 16:50:34.412144] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0 00:22:20.872 [2024-10-17 16:50:34.423013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.872 [2024-10-17 16:50:34.423032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.872 [2024-10-17 16:50:34.423040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.423047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760 00:22:20.872 [2024-10-17 16:50:34.423067] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:20.872 [2024-10-17 16:50:34.423080] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:20.872 [2024-10-17 16:50:34.423089] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:20.872 [2024-10-17 16:50:34.423106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.423117] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.872 [2024-10-17 16:50:34.423125] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760) 00:22:20.872 [2024-10-17 16:50:34.423136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.872 [2024-10-17 16:50:34.423161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0 00:22:20.872 [2024-10-17 16:50:34.423284] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.872 [2024-10-17 16:50:34.423300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.872 [2024-10-17 16:50:34.423307] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.872 [2024-10-17 16:50:34.423322] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:22:20.872 [2024-10-17 16:50:34.423336] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:22:20.872 [2024-10-17 16:50:34.423351] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.872 [2024-10-17 16:50:34.423376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.872 [2024-10-17 16:50:34.423398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.872 [2024-10-17 16:50:34.423482] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.872 [2024-10-17 16:50:34.423497] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.872 [2024-10-17 16:50:34.423504] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.872 [2024-10-17 16:50:34.423523] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:22:20.872 [2024-10-17 16:50:34.423539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:22:20.872 [2024-10-17 16:50:34.423554] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423562] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423568] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.872 [2024-10-17 16:50:34.423579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.872 [2024-10-17 16:50:34.423601] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.872 [2024-10-17 16:50:34.423679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.872 [2024-10-17 16:50:34.423694] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.872 [2024-10-17 16:50:34.423701] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423708] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.872 [2024-10-17 16:50:34.423717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:22:20.872 [2024-10-17 16:50:34.423737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423747] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.872 [2024-10-17 16:50:34.423753] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.872 [2024-10-17 16:50:34.423764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.872 [2024-10-17 16:50:34.423789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.872 [2024-10-17 16:50:34.423867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.872 [2024-10-17 16:50:34.423882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.873 [2024-10-17 16:50:34.423889] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.423895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.873 [2024-10-17 16:50:34.423903] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:22:20.873 [2024-10-17 16:50:34.423911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:22:20.873 [2024-10-17 16:50:34.423925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:22:20.873 [2024-10-17 16:50:34.424038] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:22:20.873 [2024-10-17 16:50:34.424047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:22:20.873 [2024-10-17 16:50:34.424060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424068] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424074] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.424084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.873 [2024-10-17 16:50:34.424107] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.873 [2024-10-17 16:50:34.424216] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.873 [2024-10-17 16:50:34.424232] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.873 [2024-10-17 16:50:34.424242] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424250] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.873 [2024-10-17 16:50:34.424261] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:22:20.873 [2024-10-17 16:50:34.424280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.424306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.873 [2024-10-17 16:50:34.424332] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.873 [2024-10-17 16:50:34.424415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.873 [2024-10-17 16:50:34.424430] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.873 [2024-10-17 16:50:34.424437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.873 [2024-10-17 16:50:34.424451] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:22:20.873 [2024-10-17 16:50:34.424462] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.424476] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:22:20.873 [2024-10-17 16:50:34.424495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.424512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.424533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.873 [2024-10-17 16:50:34.424555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.873 [2024-10-17 16:50:34.424688] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.873 [2024-10-17 16:50:34.424704] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.873 [2024-10-17 16:50:34.424711] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424719] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=4096, cccid=0
00:22:20.873 [2024-10-17 16:50:34.424733] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48480) on tqpair(0x1ce8760): expected_datao=0, payload_size=4096
00:22:20.873 [2024-10-17 16:50:34.424742] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424753] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424760] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424772] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.873 [2024-10-17 16:50:34.424782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.873 [2024-10-17 16:50:34.424788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424795] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.873 [2024-10-17 16:50:34.424806] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:22:20.873 [2024-10-17 16:50:34.424814] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:22:20.873 [2024-10-17 16:50:34.424825] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:22:20.873 [2024-10-17 16:50:34.424832] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:22:20.873 [2024-10-17 16:50:34.424840] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:22:20.873 [2024-10-17 16:50:34.424847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.424868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.424885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424894] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.424900] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.424911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:20.873 [2024-10-17 16:50:34.424934] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.873 [2024-10-17 16:50:34.425030] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.873 [2024-10-17 16:50:34.425046] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.873 [2024-10-17 16:50:34.425053] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760
00:22:20.873 [2024-10-17 16:50:34.425072] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425081] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425087] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.425097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.873 [2024-10-17 16:50:34.425108] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425120] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.425129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.873 [2024-10-17 16:50:34.425139] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425145] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.425160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.873 [2024-10-17 16:50:34.425170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425176] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.425191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.873 [2024-10-17 16:50:34.425200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.425220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.425238] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.425257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.873 [2024-10-17 16:50:34.425294] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48480, cid 0, qid 0
00:22:20.873 [2024-10-17 16:50:34.425306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48600, cid 1, qid 0
00:22:20.873 [2024-10-17 16:50:34.425313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48780, cid 2, qid 0
00:22:20.873 [2024-10-17 16:50:34.425320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0
00:22:20.873 [2024-10-17 16:50:34.425328] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.873 [2024-10-17 16:50:34.425510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.873 [2024-10-17 16:50:34.425525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.873 [2024-10-17 16:50:34.425532] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425539] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.873 [2024-10-17 16:50:34.425549] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:22:20.873 [2024-10-17 16:50:34.425561] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.425576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.425592] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:22:20.873 [2024-10-17 16:50:34.425608] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.873 [2024-10-17 16:50:34.425621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.873 [2024-10-17 16:50:34.425632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:20.874 [2024-10-17 16:50:34.425669] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.874 [2024-10-17 16:50:34.425766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.874 [2024-10-17 16:50:34.425781] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.874 [2024-10-17 16:50:34.425788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.425794] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.874 [2024-10-17 16:50:34.425869] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.425893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.425911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.425919] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.874 [2024-10-17 16:50:34.425930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.874 [2024-10-17 16:50:34.425952] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.874 [2024-10-17 16:50:34.426049] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.874 [2024-10-17 16:50:34.426069] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.874 [2024-10-17 16:50:34.426085] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426092] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=4096, cccid=4
00:22:20.874 [2024-10-17 16:50:34.426099] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48a80) on tqpair(0x1ce8760): expected_datao=0, payload_size=4096
00:22:20.874 [2024-10-17 16:50:34.426109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426133] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426143] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.874 [2024-10-17 16:50:34.426164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.874 [2024-10-17 16:50:34.426171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.874 [2024-10-17 16:50:34.426200] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:22:20.874 [2024-10-17 16:50:34.426218] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426237] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.874 [2024-10-17 16:50:34.426286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.874 [2024-10-17 16:50:34.426311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.874 [2024-10-17 16:50:34.426419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.874 [2024-10-17 16:50:34.426439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.874 [2024-10-17 16:50:34.426449] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426455] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=4096, cccid=4
00:22:20.874 [2024-10-17 16:50:34.426463] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48a80) on tqpair(0x1ce8760): expected_datao=0, payload_size=4096
00:22:20.874 [2024-10-17 16:50:34.426473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426495] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426504] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426516] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.874 [2024-10-17 16:50:34.426526] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.874 [2024-10-17 16:50:34.426532] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426539] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.874 [2024-10-17 16:50:34.426557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426601] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.874 [2024-10-17 16:50:34.426612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.874 [2024-10-17 16:50:34.426639] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.874 [2024-10-17 16:50:34.426747] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.874 [2024-10-17 16:50:34.426762] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.874 [2024-10-17 16:50:34.426769] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426778] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=4096, cccid=4
00:22:20.874 [2024-10-17 16:50:34.426787] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48a80) on tqpair(0x1ce8760): expected_datao=0, payload_size=4096
00:22:20.874 [2024-10-17 16:50:34.426800] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426813] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426821] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426832] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.874 [2024-10-17 16:50:34.426842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.874 [2024-10-17 16:50:34.426849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426855] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.874 [2024-10-17 16:50:34.426875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426952] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:22:20.874 [2024-10-17 16:50:34.426960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:22:20.874 [2024-10-17 16:50:34.426968] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:22:20.874 [2024-10-17 16:50:34.426987] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.426995] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.874 [2024-10-17 16:50:34.431016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.874 [2024-10-17 16:50:34.431033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.431040] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.431046] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ce8760)
00:22:20.874 [2024-10-17 16:50:34.431055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:22:20.874 [2024-10-17 16:50:34.431078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.874 [2024-10-17 16:50:34.431103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48c00, cid 5, qid 0
00:22:20.874 [2024-10-17 16:50:34.431237] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.874 [2024-10-17 16:50:34.431257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.874 [2024-10-17 16:50:34.431265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.874 [2024-10-17 16:50:34.431272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.874 [2024-10-17 16:50:34.431282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.874 [2024-10-17 16:50:34.431291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.431298] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48c00) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.431322] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48c00, cid 5, qid 0
00:22:20.875 [2024-10-17 16:50:34.431449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.431464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.431471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48c00) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.431496] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431539] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48c00, cid 5, qid 0
00:22:20.875 [2024-10-17 16:50:34.431622] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.431637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.431643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431650] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48c00) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.431668] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431678] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48c00, cid 5, qid 0
00:22:20.875 [2024-10-17 16:50:34.431793] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.431807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.431814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431821] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48c00) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.431847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431883] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431890] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431917] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431925] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.431953] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ce8760)
00:22:20.875 [2024-10-17 16:50:34.431963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.875 [2024-10-17 16:50:34.431999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48c00, cid 5, qid 0
00:22:20.875 [2024-10-17 16:50:34.432021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48a80, cid 4, qid 0
00:22:20.875 [2024-10-17 16:50:34.432028] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48d80, cid 6, qid 0
00:22:20.875 [2024-10-17 16:50:34.432035] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48f00, cid 7, qid 0
00:22:20.875 [2024-10-17 16:50:34.432214] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.875 [2024-10-17 16:50:34.432235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.875 [2024-10-17 16:50:34.432248] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=8192, cccid=5
00:22:20.875 [2024-10-17 16:50:34.432262] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48c00) on tqpair(0x1ce8760): expected_datao=0, payload_size=8192
00:22:20.875 [2024-10-17 16:50:34.432270] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432290] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432299] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.875 [2024-10-17 16:50:34.432320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.875 [2024-10-17 16:50:34.432327] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432333] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=512, cccid=4
00:22:20.875 [2024-10-17 16:50:34.432341] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48a80) on tqpair(0x1ce8760): expected_datao=0, payload_size=512
00:22:20.875 [2024-10-17 16:50:34.432354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432367] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432374] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432383] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.875 [2024-10-17 16:50:34.432392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.875 [2024-10-17 16:50:34.432398] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=512, cccid=6
00:22:20.875 [2024-10-17 16:50:34.432411] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48d80) on tqpair(0x1ce8760): expected_datao=0, payload_size=512
00:22:20.875 [2024-10-17 16:50:34.432418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432427] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432438] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:20.875 [2024-10-17 16:50:34.432456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:20.875 [2024-10-17 16:50:34.432463] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432469] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ce8760): datao=0, datal=4096, cccid=7
00:22:20.875 [2024-10-17 16:50:34.432476] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d48f00) on tqpair(0x1ce8760): expected_datao=0, payload_size=4096
00:22:20.875 [2024-10-17 16:50:34.432483] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432493] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432500] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.432521] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.432527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432548] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48c00) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.432568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.432579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.432585] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432591] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48a80) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.432624] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.432634] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.432641] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48d80) on tqpair=0x1ce8760
00:22:20.875 [2024-10-17 16:50:34.432657] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:20.875 [2024-10-17 16:50:34.432667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:20.875 [2024-10-17 16:50:34.432673] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:20.875 [2024-10-17 16:50:34.432679] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48f00) on tqpair=0x1ce8760
00:22:20.875 =====================================================
00:22:20.875 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:20.875 =====================================================
00:22:20.875 Controller Capabilities/Features
00:22:20.875 ================================
00:22:20.875 Vendor ID: 8086
00:22:20.875 Subsystem Vendor ID: 8086
00:22:20.875 Serial Number: SPDK00000000000001
00:22:20.875 Model Number: SPDK bdev Controller
00:22:20.875 Firmware Version: 25.01
00:22:20.875 Recommended Arb Burst: 6
00:22:20.875 IEEE OUI Identifier: e4 d2 5c
00:22:20.875 Multi-path I/O
00:22:20.875 May have multiple subsystem ports: Yes
00:22:20.875 May have multiple controllers: Yes
00:22:20.875 Associated with SR-IOV VF: No
00:22:20.875 Max Data Transfer Size: 131072
00:22:20.875 Max Number of Namespaces: 32
00:22:20.875 Max Number of I/O Queues: 127 00:22:20.875 NVMe Specification Version (VS): 1.3 00:22:20.875 NVMe Specification Version (Identify): 1.3 00:22:20.875 Maximum Queue Entries: 128 00:22:20.875 Contiguous Queues Required: Yes 00:22:20.875 Arbitration Mechanisms Supported 00:22:20.875 Weighted Round Robin: Not Supported 00:22:20.875 Vendor Specific: Not Supported 00:22:20.875 Reset Timeout: 15000 ms 00:22:20.875 Doorbell Stride: 4 bytes 00:22:20.875 NVM Subsystem Reset: Not Supported 00:22:20.875 Command Sets Supported 00:22:20.875 NVM Command Set: Supported 00:22:20.875 Boot Partition: Not Supported 00:22:20.875 Memory Page Size Minimum: 4096 bytes 00:22:20.875 Memory Page Size Maximum: 4096 bytes 00:22:20.875 Persistent Memory Region: Not Supported 00:22:20.875 Optional Asynchronous Events Supported 00:22:20.875 Namespace Attribute Notices: Supported 00:22:20.875 Firmware Activation Notices: Not Supported 00:22:20.875 ANA Change Notices: Not Supported 00:22:20.875 PLE Aggregate Log Change Notices: Not Supported 00:22:20.876 LBA Status Info Alert Notices: Not Supported 00:22:20.876 EGE Aggregate Log Change Notices: Not Supported 00:22:20.876 Normal NVM Subsystem Shutdown event: Not Supported 00:22:20.876 Zone Descriptor Change Notices: Not Supported 00:22:20.876 Discovery Log Change Notices: Not Supported 00:22:20.876 Controller Attributes 00:22:20.876 128-bit Host Identifier: Supported 00:22:20.876 Non-Operational Permissive Mode: Not Supported 00:22:20.876 NVM Sets: Not Supported 00:22:20.876 Read Recovery Levels: Not Supported 00:22:20.876 Endurance Groups: Not Supported 00:22:20.876 Predictable Latency Mode: Not Supported 00:22:20.876 Traffic Based Keep ALive: Not Supported 00:22:20.876 Namespace Granularity: Not Supported 00:22:20.876 SQ Associations: Not Supported 00:22:20.876 UUID List: Not Supported 00:22:20.876 Multi-Domain Subsystem: Not Supported 00:22:20.876 Fixed Capacity Management: Not Supported 00:22:20.876 Variable Capacity Management: Not 
Supported 00:22:20.876 Delete Endurance Group: Not Supported 00:22:20.876 Delete NVM Set: Not Supported 00:22:20.876 Extended LBA Formats Supported: Not Supported 00:22:20.876 Flexible Data Placement Supported: Not Supported 00:22:20.876 00:22:20.876 Controller Memory Buffer Support 00:22:20.876 ================================ 00:22:20.876 Supported: No 00:22:20.876 00:22:20.876 Persistent Memory Region Support 00:22:20.876 ================================ 00:22:20.876 Supported: No 00:22:20.876 00:22:20.876 Admin Command Set Attributes 00:22:20.876 ============================ 00:22:20.876 Security Send/Receive: Not Supported 00:22:20.876 Format NVM: Not Supported 00:22:20.876 Firmware Activate/Download: Not Supported 00:22:20.876 Namespace Management: Not Supported 00:22:20.876 Device Self-Test: Not Supported 00:22:20.876 Directives: Not Supported 00:22:20.876 NVMe-MI: Not Supported 00:22:20.876 Virtualization Management: Not Supported 00:22:20.876 Doorbell Buffer Config: Not Supported 00:22:20.876 Get LBA Status Capability: Not Supported 00:22:20.876 Command & Feature Lockdown Capability: Not Supported 00:22:20.876 Abort Command Limit: 4 00:22:20.876 Async Event Request Limit: 4 00:22:20.876 Number of Firmware Slots: N/A 00:22:20.876 Firmware Slot 1 Read-Only: N/A 00:22:20.876 Firmware Activation Without Reset: N/A 00:22:20.876 Multiple Update Detection Support: N/A 00:22:20.876 Firmware Update Granularity: No Information Provided 00:22:20.876 Per-Namespace SMART Log: No 00:22:20.876 Asymmetric Namespace Access Log Page: Not Supported 00:22:20.876 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:20.876 Command Effects Log Page: Supported 00:22:20.876 Get Log Page Extended Data: Supported 00:22:20.876 Telemetry Log Pages: Not Supported 00:22:20.876 Persistent Event Log Pages: Not Supported 00:22:20.876 Supported Log Pages Log Page: May Support 00:22:20.876 Commands Supported & Effects Log Page: Not Supported 00:22:20.876 Feature Identifiers & Effects Log Page:May 
Support 00:22:20.876 NVMe-MI Commands & Effects Log Page: May Support 00:22:20.876 Data Area 4 for Telemetry Log: Not Supported 00:22:20.876 Error Log Page Entries Supported: 128 00:22:20.876 Keep Alive: Supported 00:22:20.876 Keep Alive Granularity: 10000 ms 00:22:20.876 00:22:20.876 NVM Command Set Attributes 00:22:20.876 ========================== 00:22:20.876 Submission Queue Entry Size 00:22:20.876 Max: 64 00:22:20.876 Min: 64 00:22:20.876 Completion Queue Entry Size 00:22:20.876 Max: 16 00:22:20.876 Min: 16 00:22:20.876 Number of Namespaces: 32 00:22:20.876 Compare Command: Supported 00:22:20.876 Write Uncorrectable Command: Not Supported 00:22:20.876 Dataset Management Command: Supported 00:22:20.876 Write Zeroes Command: Supported 00:22:20.876 Set Features Save Field: Not Supported 00:22:20.876 Reservations: Supported 00:22:20.876 Timestamp: Not Supported 00:22:20.876 Copy: Supported 00:22:20.876 Volatile Write Cache: Present 00:22:20.876 Atomic Write Unit (Normal): 1 00:22:20.876 Atomic Write Unit (PFail): 1 00:22:20.876 Atomic Compare & Write Unit: 1 00:22:20.876 Fused Compare & Write: Supported 00:22:20.876 Scatter-Gather List 00:22:20.876 SGL Command Set: Supported 00:22:20.876 SGL Keyed: Supported 00:22:20.876 SGL Bit Bucket Descriptor: Not Supported 00:22:20.876 SGL Metadata Pointer: Not Supported 00:22:20.876 Oversized SGL: Not Supported 00:22:20.876 SGL Metadata Address: Not Supported 00:22:20.876 SGL Offset: Supported 00:22:20.876 Transport SGL Data Block: Not Supported 00:22:20.876 Replay Protected Memory Block: Not Supported 00:22:20.876 00:22:20.876 Firmware Slot Information 00:22:20.876 ========================= 00:22:20.876 Active slot: 1 00:22:20.876 Slot 1 Firmware Revision: 25.01 00:22:20.876 00:22:20.876 00:22:20.876 Commands Supported and Effects 00:22:20.876 ============================== 00:22:20.876 Admin Commands 00:22:20.876 -------------- 00:22:20.876 Get Log Page (02h): Supported 00:22:20.876 Identify (06h): Supported 00:22:20.876 
Abort (08h): Supported 00:22:20.876 Set Features (09h): Supported 00:22:20.876 Get Features (0Ah): Supported 00:22:20.876 Asynchronous Event Request (0Ch): Supported 00:22:20.876 Keep Alive (18h): Supported 00:22:20.876 I/O Commands 00:22:20.876 ------------ 00:22:20.876 Flush (00h): Supported LBA-Change 00:22:20.876 Write (01h): Supported LBA-Change 00:22:20.876 Read (02h): Supported 00:22:20.876 Compare (05h): Supported 00:22:20.876 Write Zeroes (08h): Supported LBA-Change 00:22:20.876 Dataset Management (09h): Supported LBA-Change 00:22:20.876 Copy (19h): Supported LBA-Change 00:22:20.876 00:22:20.876 Error Log 00:22:20.876 ========= 00:22:20.876 00:22:20.876 Arbitration 00:22:20.876 =========== 00:22:20.876 Arbitration Burst: 1 00:22:20.876 00:22:20.876 Power Management 00:22:20.876 ================ 00:22:20.876 Number of Power States: 1 00:22:20.876 Current Power State: Power State #0 00:22:20.876 Power State #0: 00:22:20.876 Max Power: 0.00 W 00:22:20.876 Non-Operational State: Operational 00:22:20.876 Entry Latency: Not Reported 00:22:20.876 Exit Latency: Not Reported 00:22:20.876 Relative Read Throughput: 0 00:22:20.876 Relative Read Latency: 0 00:22:20.876 Relative Write Throughput: 0 00:22:20.876 Relative Write Latency: 0 00:22:20.876 Idle Power: Not Reported 00:22:20.876 Active Power: Not Reported 00:22:20.876 Non-Operational Permissive Mode: Not Supported 00:22:20.876 00:22:20.876 Health Information 00:22:20.876 ================== 00:22:20.876 Critical Warnings: 00:22:20.876 Available Spare Space: OK 00:22:20.876 Temperature: OK 00:22:20.876 Device Reliability: OK 00:22:20.876 Read Only: No 00:22:20.876 Volatile Memory Backup: OK 00:22:20.876 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:20.876 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:20.876 Available Spare: 0% 00:22:20.876 Available Spare Threshold: 0% 00:22:20.876 Life Percentage Used:[2024-10-17 16:50:34.432787] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:20.876 [2024-10-17 16:50:34.432798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ce8760) 00:22:20.876 [2024-10-17 16:50:34.432809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.876 [2024-10-17 16:50:34.432831] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48f00, cid 7, qid 0 00:22:20.876 [2024-10-17 16:50:34.432962] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.876 [2024-10-17 16:50:34.432979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.876 [2024-10-17 16:50:34.432987] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.876 [2024-10-17 16:50:34.432993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48f00) on tqpair=0x1ce8760 00:22:20.876 [2024-10-17 16:50:34.433049] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:20.876 [2024-10-17 16:50:34.433071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48480) on tqpair=0x1ce8760 00:22:20.876 [2024-10-17 16:50:34.433085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.876 [2024-10-17 16:50:34.433094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48600) on tqpair=0x1ce8760 00:22:20.876 [2024-10-17 16:50:34.433102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.876 [2024-10-17 16:50:34.433113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48780) on tqpair=0x1ce8760 00:22:20.876 [2024-10-17 16:50:34.433121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:20.876 [2024-10-17 16:50:34.433129] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.876 [2024-10-17 16:50:34.433136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.876 [2024-10-17 16:50:34.433148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.876 [2024-10-17 16:50:34.433157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.876 [2024-10-17 16:50:34.433163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.876 [2024-10-17 16:50:34.433173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.876 [2024-10-17 16:50:34.433196] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.876 [2024-10-17 16:50:34.433331] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.876 [2024-10-17 16:50:34.433347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.876 [2024-10-17 16:50:34.433354] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.876 [2024-10-17 16:50:34.433361] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.876 [2024-10-17 16:50:34.433376] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.876 [2024-10-17 16:50:34.433384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.433401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.433430] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.433534] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.433548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.433555] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433561] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.433569] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:20.877 [2024-10-17 16:50:34.433576] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:20.877 [2024-10-17 16:50:34.433593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433604] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433610] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.433621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.433643] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.433723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.433738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.433745] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.433769] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433780] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433791] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.433802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.433825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.433900] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.433917] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.433926] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.433949] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.433968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.433980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.434009] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.434093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.434108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.434115] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434121] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.434140] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434151] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434157] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.434168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.434190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.434269] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.434284] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.434291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.434316] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.434344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.434366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 
16:50:34.434474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.434489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.434495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.434520] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434531] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434537] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.434552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.434574] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.434654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.434669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.434676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434682] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.434701] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434711] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434718] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.434728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.434751] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.434859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.434873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.434880] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.434905] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434916] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.434922] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.434933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.434954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.439013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.439030] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.439037] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.439043] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.439061] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.439072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:20.877 [2024-10-17 16:50:34.439078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ce8760) 00:22:20.877 [2024-10-17 16:50:34.439089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.877 [2024-10-17 16:50:34.439111] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d48900, cid 3, qid 0 00:22:20.877 [2024-10-17 16:50:34.439260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.877 [2024-10-17 16:50:34.439275] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.877 [2024-10-17 16:50:34.439282] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.877 [2024-10-17 16:50:34.439289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d48900) on tqpair=0x1ce8760 00:22:20.877 [2024-10-17 16:50:34.439303] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:20.877 0% 00:22:20.877 Data Units Read: 0 00:22:20.877 Data Units Written: 0 00:22:20.877 Host Read Commands: 0 00:22:20.877 Host Write Commands: 0 00:22:20.877 Controller Busy Time: 0 minutes 00:22:20.877 Power Cycles: 0 00:22:20.877 Power On Hours: 0 hours 00:22:20.877 Unsafe Shutdowns: 0 00:22:20.877 Unrecoverable Media Errors: 0 00:22:20.877 Lifetime Error Log Entries: 0 00:22:20.877 Warning Temperature Time: 0 minutes 00:22:20.877 Critical Temperature Time: 0 minutes 00:22:20.877 00:22:20.877 Number of Queues 00:22:20.877 ================ 00:22:20.877 Number of I/O Submission Queues: 127 00:22:20.877 Number of I/O Completion Queues: 127 00:22:20.877 00:22:20.877 Active Namespaces 00:22:20.877 ================= 00:22:20.877 Namespace ID:1 00:22:20.877 Error Recovery Timeout: Unlimited 00:22:20.877 Command Set Identifier: NVM (00h) 00:22:20.877 Deallocate: Supported 00:22:20.877 Deallocated/Unwritten 
Error: Not Supported 00:22:20.877 Deallocated Read Value: Unknown 00:22:20.877 Deallocate in Write Zeroes: Not Supported 00:22:20.877 Deallocated Guard Field: 0xFFFF 00:22:20.877 Flush: Supported 00:22:20.877 Reservation: Supported 00:22:20.877 Namespace Sharing Capabilities: Multiple Controllers 00:22:20.877 Size (in LBAs): 131072 (0GiB) 00:22:20.877 Capacity (in LBAs): 131072 (0GiB) 00:22:20.877 Utilization (in LBAs): 131072 (0GiB) 00:22:20.877 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:20.877 EUI64: ABCDEF0123456789 00:22:20.877 UUID: 0b74ab0b-4ba9-4588-be85-52999f175da2 00:22:20.877 Thin Provisioning: Not Supported 00:22:20.877 Per-NS Atomic Units: Yes 00:22:20.877 Atomic Boundary Size (Normal): 0 00:22:20.877 Atomic Boundary Size (PFail): 0 00:22:20.877 Atomic Boundary Offset: 0 00:22:20.878 Maximum Single Source Range Length: 65535 00:22:20.878 Maximum Copy Length: 65535 00:22:20.878 Maximum Source Range Count: 1 00:22:20.878 NGUID/EUI64 Never Reused: No 00:22:20.878 Namespace Write Protected: No 00:22:20.878 Number of LBA Formats: 1 00:22:20.878 Current LBA Format: LBA Format #00 00:22:20.878 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:20.878 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@514 -- # nvmfcleanup 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.878 rmmod nvme_tcp 00:22:20.878 rmmod nvme_fabrics 00:22:20.878 rmmod nvme_keyring 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2411887 ']' 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2411887 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2411887 ']' 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2411887 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411887 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2411887' 00:22:20.878 killing process with pid 2411887 00:22:20.878 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2411887 00:22:21.137 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2411887 00:22:21.137 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:21.137 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:21.137 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:21.137 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.138 16:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.672 00:22:23.672 real 0m5.540s 00:22:23.672 user 0m4.535s 00:22:23.672 sys 0m1.973s 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 
************************************ 00:22:23.672 END TEST nvmf_identify 00:22:23.672 ************************************ 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.672 ************************************ 00:22:23.672 START TEST nvmf_perf 00:22:23.672 ************************************ 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:23.672 * Looking for test storage... 00:22:23.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:23.672 16:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # IFS=.-: 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.672 --rc genhtml_branch_coverage=1 00:22:23.672 --rc genhtml_function_coverage=1 00:22:23.672 --rc genhtml_legend=1 00:22:23.672 --rc geninfo_all_blocks=1 00:22:23.672 --rc geninfo_unexecuted_blocks=1 00:22:23.672 00:22:23.672 ' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.672 --rc genhtml_branch_coverage=1 00:22:23.672 --rc genhtml_function_coverage=1 00:22:23.672 --rc genhtml_legend=1 00:22:23.672 --rc geninfo_all_blocks=1 00:22:23.672 --rc geninfo_unexecuted_blocks=1 00:22:23.672 00:22:23.672 ' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.672 --rc genhtml_branch_coverage=1 00:22:23.672 --rc genhtml_function_coverage=1 00:22:23.672 --rc genhtml_legend=1 00:22:23.672 --rc geninfo_all_blocks=1 00:22:23.672 --rc geninfo_unexecuted_blocks=1 00:22:23.672 00:22:23.672 ' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.672 --rc genhtml_branch_coverage=1 00:22:23.672 --rc genhtml_function_coverage=1 00:22:23.672 --rc genhtml_legend=1 00:22:23.672 --rc geninfo_all_blocks=1 00:22:23.672 --rc geninfo_unexecuted_blocks=1 00:22:23.672 00:22:23.672 ' 00:22:23.672 16:50:37 
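The `lt 1.15 2` check traced above splits each dotted version on `.`, `-`, and `:` into an array and compares component by component (scripts/common.sh@364-368) to decide whether the installed lcov predates 2.x. An equivalent, compact sketch using `sort -V` in place of the explicit loop:

```shell
#!/usr/bin/env bash
# Version comparison equivalent to the cmp_versions walk in the log:
# 'lt A B' succeeds when version A is strictly older than version B.
lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

`sort -V` (GNU coreutils version sort) orders dotted versions numerically per component, which matches what the array walk computes for inputs like `1.15` vs `2`.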
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.672 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.673 16:50:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:23.673 16:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:25.577 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.577 
16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:25.577 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up 
]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:25.577 Found net devices under 0000:09:00.0: cvl_0_0 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:25.577 Found net devices under 0000:09:00.1: cvl_0_1 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
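Device discovery above resolves each matching PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and stripping everything up to the last `/` (nvmf/common.sh@409 and @425), which is how `0000:09:00.0` becomes `cvl_0_0`. The same glob-and-strip, demonstrated on a scratch tree since the real sysfs layout depends on the host's hardware:

```shell
#!/usr/bin/env bash
# Build a fake sysfs fragment: one PCI function exposing one net device.
root=$(mktemp -d)
mkdir -p "$root/0000:09:00.0/net/cvl_0_0"

pci=0000:09:00.0
pci_net_devs=("$root/$pci/net/"*)          # glob the full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")    # strip down to interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$root"
```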
nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:22:25.577 00:22:25.577 --- 10.0.0.2 ping statistics --- 00:22:25.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.577 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:22:25.577 00:22:25.577 --- 10.0.0.1 ping statistics --- 00:22:25.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.577 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:25.577 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2413979 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2413979 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.578 
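The `nvmf_tcp_init` sequence replayed above isolates the target in a network namespace: the target-side interface moves into `cvl_0_0_ns_spdk` with 10.0.0.2/24, the initiator side keeps 10.0.0.1/24 in the root namespace, an SPDK-tagged firewall rule opens port 4420, and a ping in each direction verifies the path. Condensed from the log into one fragment (root required; the interface and namespace names are the ones this particular host used):

```shell
#!/usr/bin/env bash
# Target/initiator split as replayed in the log (run as root).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagged so the iptr teardown can find the rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Sanity checks in both directions, as in the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```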
16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2413979 ']' 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.578 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.578 [2024-10-17 16:50:39.234914] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:25.578 [2024-10-17 16:50:39.235018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.836 [2024-10-17 16:50:39.302581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.836 [2024-10-17 16:50:39.363832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.836 [2024-10-17 16:50:39.363882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.836 [2024-10-17 16:50:39.363910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.836 [2024-10-17 16:50:39.363921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.836 [2024-10-17 16:50:39.363931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.836 [2024-10-17 16:50:39.365473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.837 [2024-10-17 16:50:39.365496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.837 [2024-10-17 16:50:39.365555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.837 [2024-10-17 16:50:39.365559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:25.837 16:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:29.116 16:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:29.116 16:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:29.373 16:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:22:29.373 16:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:29.632 16:50:43 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:29.632 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:22:29.632 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:29.632 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:29.632 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.889 [2024-10-17 16:50:43.507330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.889 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.148 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:30.148 16:50:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.406 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:30.406 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:30.664 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.922 [2024-10-17 16:50:44.595272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.179 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:31.437 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:22:31.437 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:31.437 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:31.437 16:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:32.810 Initializing NVMe Controllers 00:22:32.810 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:22:32.810 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:22:32.810 Initialization complete. Launching workers. 00:22:32.810 ======================================================== 00:22:32.810 Latency(us) 00:22:32.810 Device Information : IOPS MiB/s Average min max 00:22:32.810 PCIE (0000:0b:00.0) NSID 1 from core 0: 86212.81 336.77 370.73 15.58 5321.31 00:22:32.810 ======================================================== 00:22:32.810 Total : 86212.81 336.77 370.73 15.58 5321.31 00:22:32.810 00:22:32.810 16:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:33.742 Initializing NVMe Controllers 00:22:33.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:33.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:33.742 Initialization complete. Launching workers. 
00:22:33.742 ======================================================== 00:22:33.742 Latency(us) 00:22:33.742 Device Information : IOPS MiB/s Average min max 00:22:33.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 137.65 0.54 7279.58 141.59 45270.79 00:22:33.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.86 0.22 17727.55 7960.21 47891.30 00:22:33.742 ======================================================== 00:22:33.742 Total : 194.51 0.76 10333.60 141.59 47891.30 00:22:33.742 00:22:33.742 16:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:35.116 Initializing NVMe Controllers 00:22:35.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.116 Initialization complete. Launching workers. 
00:22:35.116 ======================================================== 00:22:35.116 Latency(us) 00:22:35.116 Device Information : IOPS MiB/s Average min max 00:22:35.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8385.64 32.76 3816.38 574.07 7986.06 00:22:35.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3925.83 15.34 8193.79 5581.87 15873.47 00:22:35.116 ======================================================== 00:22:35.116 Total : 12311.47 48.09 5212.23 574.07 15873.47 00:22:35.116 00:22:35.116 16:50:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:35.116 16:50:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:35.116 16:50:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:38.398 Initializing NVMe Controllers 00:22:38.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.398 Controller IO queue size 128, less than required. 00:22:38.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.398 Controller IO queue size 128, less than required. 00:22:38.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:38.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:38.398 Initialization complete. Launching workers. 
00:22:38.398 ======================================================== 00:22:38.398 Latency(us) 00:22:38.398 Device Information : IOPS MiB/s Average min max 00:22:38.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1675.47 418.87 76955.33 50567.75 121435.42 00:22:38.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.64 144.91 232907.30 92899.58 359164.31 00:22:38.398 ======================================================== 00:22:38.398 Total : 2255.12 563.78 117040.48 50567.75 359164.31 00:22:38.398 00:22:38.398 16:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:38.398 No valid NVMe controllers or AIO or URING devices found 00:22:38.398 Initializing NVMe Controllers 00:22:38.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.398 Controller IO queue size 128, less than required. 00:22:38.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.398 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:38.398 Controller IO queue size 128, less than required. 00:22:38.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:38.399 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:38.399 WARNING: Some requested NVMe devices were skipped 00:22:38.399 16:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:40.927 Initializing NVMe Controllers 00:22:40.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.928 Controller IO queue size 128, less than required. 00:22:40.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.928 Controller IO queue size 128, less than required. 00:22:40.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:40.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:40.928 Initialization complete. Launching workers. 
00:22:40.928 00:22:40.928 ==================== 00:22:40.928 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:40.928 TCP transport: 00:22:40.928 polls: 8630 00:22:40.928 idle_polls: 5407 00:22:40.928 sock_completions: 3223 00:22:40.928 nvme_completions: 5977 00:22:40.928 submitted_requests: 8882 00:22:40.928 queued_requests: 1 00:22:40.928 00:22:40.928 ==================== 00:22:40.928 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:40.928 TCP transport: 00:22:40.928 polls: 8764 00:22:40.928 idle_polls: 5658 00:22:40.928 sock_completions: 3106 00:22:40.928 nvme_completions: 6141 00:22:40.928 submitted_requests: 9308 00:22:40.928 queued_requests: 1 00:22:40.928 ======================================================== 00:22:40.928 Latency(us) 00:22:40.928 Device Information : IOPS MiB/s Average min max 00:22:40.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1490.37 372.59 88321.63 62888.71 151433.25 00:22:40.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1531.27 382.82 84187.46 41768.75 134880.07 00:22:40.928 ======================================================== 00:22:40.928 Total : 3021.64 755.41 86226.57 41768.75 151433.25 00:22:40.928 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.928 rmmod nvme_tcp 00:22:40.928 rmmod nvme_fabrics 00:22:40.928 rmmod nvme_keyring 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2413979 ']' 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2413979 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2413979 ']' 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2413979 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2413979 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413979' 00:22:40.928 killing process with pid 2413979 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 
-- # kill 2413979 00:22:40.928 16:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2413979 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.829 16:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.736 00:22:44.736 real 0m21.177s 00:22:44.736 user 1m5.208s 00:22:44.736 sys 0m5.623s 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:44.736 ************************************ 00:22:44.736 END TEST nvmf_perf 00:22:44.736 ************************************ 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.736 ************************************ 00:22:44.736 START TEST nvmf_fio_host 00:22:44.736 ************************************ 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:44.736 * Looking for test storage... 00:22:44.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.736 16:50:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.736 16:50:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:44.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.736 --rc genhtml_branch_coverage=1 00:22:44.736 --rc genhtml_function_coverage=1 00:22:44.736 --rc genhtml_legend=1 00:22:44.736 --rc geninfo_all_blocks=1 00:22:44.736 --rc geninfo_unexecuted_blocks=1 00:22:44.736 00:22:44.736 ' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:44.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.736 --rc genhtml_branch_coverage=1 00:22:44.736 --rc genhtml_function_coverage=1 00:22:44.736 --rc genhtml_legend=1 00:22:44.736 --rc geninfo_all_blocks=1 00:22:44.736 --rc geninfo_unexecuted_blocks=1 00:22:44.736 00:22:44.736 ' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:44.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.736 --rc genhtml_branch_coverage=1 00:22:44.736 --rc genhtml_function_coverage=1 00:22:44.736 --rc genhtml_legend=1 00:22:44.736 --rc geninfo_all_blocks=1 00:22:44.736 --rc geninfo_unexecuted_blocks=1 00:22:44.736 00:22:44.736 ' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:44.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.736 --rc genhtml_branch_coverage=1 00:22:44.736 --rc genhtml_function_coverage=1 00:22:44.736 --rc genhtml_legend=1 00:22:44.736 --rc geninfo_all_blocks=1 00:22:44.736 --rc geninfo_unexecuted_blocks=1 00:22:44.736 00:22:44.736 ' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.736 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:44.737 16:50:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.737 16:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x159b)' 00:22:46.638 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:46.638 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.638 16:51:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:46.638 Found net devices under 0000:09:00.0: cvl_0_0 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:46.638 Found net devices under 0000:09:00.1: cvl_0_1 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.638 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.639 16:51:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.639 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:46.897 00:22:46.897 --- 10.0.0.2 ping statistics --- 00:22:46.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.897 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:46.897 00:22:46.897 --- 10.0.0.1 ping statistics --- 00:22:46.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.897 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2417957 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2417957 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2417957 ']' 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.897 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.897 [2024-10-17 16:51:00.447435] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:46.897 [2024-10-17 16:51:00.447550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.897 [2024-10-17 16:51:00.520366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.897 [2024-10-17 16:51:00.585090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.897 [2024-10-17 16:51:00.585158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:46.897 [2024-10-17 16:51:00.585177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.897 [2024-10-17 16:51:00.585190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.897 [2024-10-17 16:51:00.585203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.897 [2024-10-17 16:51:00.586894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.897 [2024-10-17 16:51:00.586949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.897 [2024-10-17 16:51:00.587034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.897 [2024-10-17 16:51:00.587037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.155 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.155 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:47.155 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:47.413 [2024-10-17 16:51:00.960919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.413 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:47.413 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:47.413 16:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.413 16:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:47.671 Malloc1 00:22:47.671 16:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.237 16:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.237 16:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.494 [2024-10-17 16:51:02.164803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.752 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:49.010 16:51:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:49.010 16:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.010 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:49.010 fio-3.35 00:22:49.010 Starting 1 thread 00:22:51.538 00:22:51.538 test: (groupid=0, jobs=1): err= 0: pid=2418418: Thu Oct 17 16:51:05 2024 00:22:51.538 read: IOPS=8742, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:22:51.538 slat (usec): min=2, max=121, avg= 2.68, stdev= 1.57 00:22:51.538 clat (usec): min=2318, max=13168, avg=7990.49, stdev=646.49 00:22:51.538 lat (usec): min=2340, max=13170, avg=7993.17, stdev=646.41 00:22:51.538 clat percentiles (usec): 00:22:51.538 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:22:51.538 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:22:51.539 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8979], 00:22:51.539 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[11338], 99.95th=[12125], 00:22:51.539 | 99.99th=[12780] 00:22:51.539 bw ( KiB/s): min=34200, max=35472, per=100.00%, avg=34972.00, stdev=547.69, samples=4 00:22:51.539 iops : min= 8550, max= 8868, avg=8743.00, stdev=136.92, samples=4 00:22:51.539 write: IOPS=8740, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec); 0 zone resets 00:22:51.539 slat (nsec): min=2155, max=93537, avg=2802.97, stdev=1317.69 00:22:51.539 clat (usec): min=1004, max=13146, avg=6620.56, stdev=570.40 00:22:51.539 lat (usec): min=1010, max=13149, avg=6623.36, stdev=570.37 00:22:51.539 clat percentiles (usec): 00:22:51.539 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:22:51.539 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:22:51.539 | 
70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:22:51.539 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[11863], 99.95th=[12649], 00:22:51.539 | 99.99th=[13042] 00:22:51.539 bw ( KiB/s): min=34688, max=35200, per=99.97%, avg=34952.00, stdev=231.03, samples=4 00:22:51.539 iops : min= 8672, max= 8800, avg=8738.00, stdev=57.76, samples=4 00:22:51.539 lat (msec) : 2=0.03%, 4=0.11%, 10=99.65%, 20=0.21% 00:22:51.539 cpu : usr=62.86%, sys=35.34%, ctx=79, majf=0, minf=31 00:22:51.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:51.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:51.539 issued rwts: total=17546,17543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:51.539 00:22:51.539 Run status group 0 (all jobs): 00:22:51.539 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.9MB), run=2007-2007msec 00:22:51.539 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.9MB), run=2007-2007msec 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:51.539 16:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:51.797 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:51.797 fio-3.35 00:22:51.797 Starting 1 thread 00:22:54.330 00:22:54.330 test: (groupid=0, jobs=1): err= 0: pid=2418753: Thu Oct 17 16:51:07 2024 00:22:54.330 read: IOPS=8299, BW=130MiB/s (136MB/s)(260MiB/2008msec) 00:22:54.330 slat (usec): min=2, max=100, avg= 3.91, stdev= 1.84 00:22:54.330 clat (usec): min=2138, max=16537, avg=8845.67, stdev=2024.60 00:22:54.330 lat (usec): min=2142, max=16540, avg=8849.58, stdev=2024.66 00:22:54.330 clat percentiles (usec): 00:22:54.330 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7111], 00:22:54.330 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:22:54.330 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:22:54.330 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15926], 99.95th=[16188], 00:22:54.330 | 99.99th=[16450] 00:22:54.330 bw ( KiB/s): min=62496, max=75744, per=51.67%, avg=68616.00, stdev=6504.61, samples=4 00:22:54.330 iops : min= 3906, max= 4734, avg=4288.50, stdev=406.54, samples=4 00:22:54.330 write: IOPS=4902, BW=76.6MiB/s (80.3MB/s)(140MiB/1832msec); 0 zone resets 00:22:54.330 slat (usec): min=30, max=192, avg=35.37, stdev= 6.09 00:22:54.330 clat (usec): min=4428, max=18282, avg=11410.79, stdev=2112.37 00:22:54.330 lat (usec): min=4460, max=18315, avg=11446.16, stdev=2112.45 00:22:54.330 clat percentiles (usec): 00:22:54.330 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 
9634], 00:22:54.330 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:22:54.330 | 70.00th=[12256], 80.00th=[13304], 90.00th=[14615], 95.00th=[15401], 00:22:54.330 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[18220], 00:22:54.330 | 99.99th=[18220] 00:22:54.330 bw ( KiB/s): min=64992, max=79872, per=91.16%, avg=71512.00, stdev=7479.76, samples=4 00:22:54.330 iops : min= 4062, max= 4992, avg=4469.50, stdev=467.49, samples=4 00:22:54.330 lat (msec) : 4=0.08%, 10=56.21%, 20=43.70% 00:22:54.330 cpu : usr=78.72%, sys=20.13%, ctx=29, majf=0, minf=53 00:22:54.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:54.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:54.330 issued rwts: total=16666,8982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:54.330 00:22:54.330 Run status group 0 (all jobs): 00:22:54.330 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2008-2008msec 00:22:54.330 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=140MiB (147MB), run=1832-1832msec 00:22:54.330 16:51:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:54.589 
16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.589 rmmod nvme_tcp 00:22:54.589 rmmod nvme_fabrics 00:22:54.589 rmmod nvme_keyring 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 2417957 ']' 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2417957 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2417957 ']' 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2417957 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2417957 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2417957' 00:22:54.589 
killing process with pid 2417957 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2417957 00:22:54.589 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2417957 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.847 16:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.381 00:22:57.381 real 0m12.400s 00:22:57.381 user 0m36.957s 00:22:57.381 sys 0m4.217s 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.381 ************************************ 00:22:57.381 END 
TEST nvmf_fio_host 00:22:57.381 ************************************ 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.381 ************************************ 00:22:57.381 START TEST nvmf_failover 00:22:57.381 ************************************ 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:57.381 * Looking for test storage... 00:22:57.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.381 --rc genhtml_branch_coverage=1 00:22:57.381 --rc genhtml_function_coverage=1 00:22:57.381 --rc genhtml_legend=1 00:22:57.381 --rc geninfo_all_blocks=1 00:22:57.381 --rc geninfo_unexecuted_blocks=1 00:22:57.381 00:22:57.381 ' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.381 --rc genhtml_branch_coverage=1 00:22:57.381 --rc genhtml_function_coverage=1 00:22:57.381 --rc genhtml_legend=1 00:22:57.381 --rc geninfo_all_blocks=1 00:22:57.381 --rc geninfo_unexecuted_blocks=1 00:22:57.381 00:22:57.381 ' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.381 --rc genhtml_branch_coverage=1 00:22:57.381 --rc genhtml_function_coverage=1 00:22:57.381 --rc genhtml_legend=1 00:22:57.381 --rc geninfo_all_blocks=1 00:22:57.381 --rc geninfo_unexecuted_blocks=1 00:22:57.381 00:22:57.381 ' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.381 --rc genhtml_branch_coverage=1 00:22:57.381 --rc genhtml_function_coverage=1 00:22:57.381 --rc genhtml_legend=1 00:22:57.381 --rc geninfo_all_blocks=1 
00:22:57.381 --rc geninfo_unexecuted_blocks=1 00:22:57.381 00:22:57.381 ' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.381 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.382 16:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.286 16:51:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:59.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:59.286 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:59.286 Found net devices under 0000:09:00.0: cvl_0_0 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:59.286 Found net devices under 0000:09:00.1: cvl_0_1 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.286 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:22:59.287 00:22:59.287 --- 10.0.0.2 ping statistics --- 00:22:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.287 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:59.287 00:22:59.287 --- 10.0.0.1 ping statistics --- 00:22:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.287 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2421573 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@508 -- # waitforlisten 2421573 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2421573 ']' 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.287 16:51:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.545 [2024-10-17 16:51:13.009654] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:22:59.545 [2024-10-17 16:51:13.009735] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.545 [2024-10-17 16:51:13.073511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:59.545 [2024-10-17 16:51:13.134178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.545 [2024-10-17 16:51:13.134233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.545 [2024-10-17 16:51:13.134246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.545 [2024-10-17 16:51:13.134257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:59.545 [2024-10-17 16:51:13.134266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.545 [2024-10-17 16:51:13.135730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.545 [2024-10-17 16:51:13.135796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.545 [2024-10-17 16:51:13.135792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.830 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:00.113 [2024-10-17 16:51:13.542795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.113 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:00.370 Malloc0 00:23:00.371 16:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.628 16:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.885 16:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.143 [2024-10-17 16:51:14.647329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.143 16:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:01.401 [2024-10-17 16:51:14.912051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.401 16:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:01.660 [2024-10-17 16:51:15.189028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2421868 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2421868 /var/tmp/bdevperf.sock 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 2421868 ']' 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.660 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:01.918 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.918 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:01.918 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.483 NVMe0n1 00:23:02.483 16:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:02.741 00:23:02.741 16:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2422000 00:23:02.741 16:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.741 16:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:03.676 16:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.937 [2024-10-17 16:51:17.551541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2440 is same with the state(6) to be set 00:23:03.937 [identical *ERROR* messages repeated through 16:51:17.552242] 00:23:03.937 16:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:07.219 16:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:07.477 00:23:07.477 16:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:07.735 16:51:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:11.016 16:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.016 [2024-10-17 16:51:24.586264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP
Target Listening on 10.0.0.2 port 4420 *** 00:23:11.016 16:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:11.951 16:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:12.209 [2024-10-17 16:51:25.871372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12882e0 is same with the state(6) to be set 00:23:12.209 [identical *ERROR* messages repeated through 16:51:25.871509] 00:23:12.209 16:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2422000 00:23:18.780 { 00:23:18.780 "results": [ 00:23:18.780 { 00:23:18.780 "job": "NVMe0n1", 00:23:18.780 "core_mask": "0x1", 00:23:18.780 "workload": "verify", 00:23:18.780 "status": "finished", 00:23:18.780 "verify_range": { 00:23:18.780 "start": 0, 00:23:18.780 "length": 16384 00:23:18.780 }, 00:23:18.780 "queue_depth": 128, 00:23:18.780 "io_size": 4096, 00:23:18.780 "runtime": 15.007016,
00:23:18.780 "iops": 8404.46894972325, 00:23:18.780 "mibps": 32.829956834856446, 00:23:18.780 "io_failed": 10093, 00:23:18.780 "io_timeout": 0, 00:23:18.780 "avg_latency_us": 14073.726749099285, 00:23:18.780 "min_latency_us": 582.5422222222222, 00:23:18.780 "max_latency_us": 21165.70074074074 00:23:18.780 } 00:23:18.780 ], 00:23:18.780 "core_count": 1 00:23:18.780 } 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2421868 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2421868 ']' 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2421868 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2421868 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2421868' 00:23:18.780 killing process with pid 2421868 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2421868 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2421868 00:23:18.780 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:18.780 [2024-10-17 16:51:15.257124] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:23:18.780 [2024-10-17 16:51:15.257208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421868 ] 00:23:18.780 [2024-10-17 16:51:15.315770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.780 [2024-10-17 16:51:15.376177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.780 Running I/O for 15 seconds... 00:23:18.780 8509.00 IOPS, 33.24 MiB/s [2024-10-17T14:51:32.470Z] [2024-10-17 16:51:17.552866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.552904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.552930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.552946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.552962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.552993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.780 [2024-10-17 16:51:17.553439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.780 [2024-10-17 16:51:17.553454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.781 [2024-10-17 16:51:17.553467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.781 [2024-10-17 16:51:17.553494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.781 [2024-10-17 16:51:17.553521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.781 [2024-10-17 16:51:17.553548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.781 [2024-10-17 16:51:17.553562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.553977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.553993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 16:51:17.554034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.781 [2024-10-17 16:51:17.554049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.781 [2024-10-17 
16:51:17.554064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.554597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.554980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.554998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.555065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.781 [2024-10-17 16:51:17.555665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.555693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.781 [2024-10-17 16:51:17.555719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.781 [2024-10-17 16:51:17.555733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.555963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.555976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:17.556636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:18.782 [2024-10-17 16:51:17.556690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0
00:23:18.782 [2024-10-17 16:51:17.556704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:18.782 [2024-10-17 16:51:17.556733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:18.782 [2024-10-17 16:51:17.556749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0
00:23:18.782 [2024-10-17 16:51:17.556763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556819] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bd6e0 was disconnected and freed. reset controller.
00:23:18.782 [2024-10-17 16:51:17.556837] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:18.782 [2024-10-17 16:51:17.556869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.782 [2024-10-17 16:51:17.556887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.782 [2024-10-17 16:51:17.556916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.782 [2024-10-17 16:51:17.556943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.556956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.782 [2024-10-17 16:51:17.556969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:17.557009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:18.782 [2024-10-17 16:51:17.557072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189c620 (9): Bad file descriptor
00:23:18.782 [2024-10-17 16:51:17.560285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:18.782 [2024-10-17 16:51:17.636859] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:18.782 8164.00 IOPS, 31.89 MiB/s [2024-10-17T14:51:32.472Z]
8296.33 IOPS, 32.41 MiB/s [2024-10-17T14:51:32.472Z]
8376.75 IOPS, 32.72 MiB/s [2024-10-17T14:51:32.472Z]
[2024-10-17 16:51:21.306732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.306819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.306861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.306890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.306917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.306945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.306973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.306986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.782 [2024-10-17 16:51:21.307068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782 [2024-10-17 16:51:21.307410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.782 [2024-10-17 16:51:21.307422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.782
[2024-10-17 16:51:21.307436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.782 [2024-10-17 16:51:21.307658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.782 [2024-10-17 16:51:21.307671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 
16:51:21.307901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.307972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.307986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308080] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 
[2024-10-17 16:51:21.308735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.783 [2024-10-17 16:51:21.308875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308905] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.308923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91520 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.308935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.308994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.783 [2024-10-17 16:51:21.309023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.783 [2024-10-17 16:51:21.309052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.783 [2024-10-17 16:51:21.309078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.783 [2024-10-17 16:51:21.309104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189c620 is same with the state(6) to be set 00:23:18.783 [2024-10-17 
16:51:21.309291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91528 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91536 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91544 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 
16:51:21.309468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91552 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91560 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91568 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91576 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91584 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91592 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783 [2024-10-17 16:51:21.309750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91600 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783 [2024-10-17 16:51:21.309763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783 [2024-10-17 16:51:21.309776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783 [2024-10-17 16:51:21.309787] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783
[2024-10-17 16:51:21.309797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91608 len:8 PRP1 0x0 PRP2 0x0 00:23:18.783
[2024-10-17 16:51:21.309809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.783
[2024-10-17 16:51:21.309822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.783
[2024-10-17 16:51:21.309833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.783
[... same four-line cycle (print_command -> ABORTED - SQ DELETION (00/08) -> aborting queued i/o -> Command completed manually) repeated between 16:51:21.309844 and 16:51:21.319582 for queued commands on sqid:1 cid:0 nsid:1, len:8, PRP1 0x0 PRP2 0x0: READ lba:91616 through lba:91952 in steps of 8, READ lba:91512, WRITE lba:92528, and WRITE lba:91960 through lba:92152 in steps of 8 ...]
[2024-10-17 16:51:21.319594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92160 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785
[2024-10-17 16:51:21.319607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785
[2024-10-17 16:51:21.319620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785
[2024-10-17 16:51:21.319631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92168 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92176 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92184 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:92192 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92200 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92208 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92216 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319952] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.319963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.319974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92224 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.319986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.319999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92232 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92240 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 
16:51:21.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92248 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92256 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92264 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92272 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92280 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92288 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92296 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320442] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92304 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92312 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92320 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92328 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 
[2024-10-17 16:51:21.320607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92336 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92344 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92352 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92360 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92368 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92376 len:8 PRP1 0x0 PRP2 0x0 00:23:18.785 [2024-10-17 16:51:21.320886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.785 [2024-10-17 16:51:21.320898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.785 [2024-10-17 16:51:21.320909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.785 [2024-10-17 16:51:21.320920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92384 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.320931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.320944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.320954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.320965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92392 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.320977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.320990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92400 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92408 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92416 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92424 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92432 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92440 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92448 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92456 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92464 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92472 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92480 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92488 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 
[2024-10-17 16:51:21.321578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92496 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92504 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92512 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:92520 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.786 [2024-10-17 16:51:21.321765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.786 [2024-10-17 16:51:21.321776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91520 len:8 PRP1 0x0 PRP2 0x0 00:23:18.786 [2024-10-17 16:51:21.321789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:21.321852] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bf7d0 was disconnected and freed. reset controller. 00:23:18.786 [2024-10-17 16:51:21.321870] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:18.786 [2024-10-17 16:51:21.321885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.786 [2024-10-17 16:51:21.321940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189c620 (9): Bad file descriptor 00:23:18.786 [2024-10-17 16:51:21.325155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.786 [2024-10-17 16:51:21.356808] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
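The repeated abort messages above are regular enough to summarize mechanically. A minimal sketch, assuming the stock SPDK `nvme_qpair.c` log format shown in this capture (the `log` sample below is abbreviated to two command/completion pairs, not the full output):

```python
import re

# Two abbreviated command/completion pairs in the nvme_qpair.c format above.
log = (
    "[2024-10-17 16:51:21.312616] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92064 len:8 PRP1 0x0 PRP2 0x0\n"
    "[2024-10-17 16:51:21.312628] nvme_qpair.c: 474:spdk_nvme_print_completion: "
    "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0\n"
    "[2024-10-17 16:51:21.312663] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92072 len:8 PRP1 0x0 PRP2 0x0\n"
    "[2024-10-17 16:51:21.312675] nvme_qpair.c: 474:spdk_nvme_print_completion: "
    "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0\n"
)

# Opcode and LBA from each printed command line.
cmd_re = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+"
)

cmds = [(m.group(1), int(m.group(2))) for m in cmd_re.finditer(log)]
aborts = log.count("ABORTED - SQ DELETION")
lo, hi = min(l for _, l in cmds), max(l for _, l in cmds)
print(f"{aborts} aborted commands, LBA range {lo}-{hi}")
# -> 2 aborted commands, LBA range 92064-92072
```

Run over the full capture, the same two patterns collapse thousands of lines into one count plus an LBA range per abort burst.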
00:23:18.786 8324.60 IOPS, 32.52 MiB/s [2024-10-17T14:51:32.476Z] 8356.00 IOPS, 32.64 MiB/s [2024-10-17T14:51:32.476Z] 8401.57 IOPS, 32.82 MiB/s [2024-10-17T14:51:32.476Z] 8430.25 IOPS, 32.93 MiB/s [2024-10-17T14:51:32.476Z] 8448.78 IOPS, 33.00 MiB/s [2024-10-17T14:51:32.476Z] [2024-10-17 16:51:25.872272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.872314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print/abort cycle repeated for READ commands lba:17480 through lba:17536 (len:8, LBA step 8, cids 54, 27, 80, 89, 91, 111, 49, 126, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands lba:17992 through lba:18040 (len:8, cids 85, 55, 74, 114, 95, 36, 44, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ABORTED - SQ DELETION (00/08), timestamps 2024-10-17 16:51:25.872355 through 16:51:25.872767 ...]
00:23:18.786 [2024-10-17 16:51:25.872781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:18.786 [2024-10-17 16:51:25.872941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.872980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.872995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.786 [2024-10-17 16:51:25.873266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17544 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 
16:51:25.873460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.786 [2024-10-17 16:51:25.873629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.786 [2024-10-17 16:51:25.873641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873933] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.873973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.873988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:18.787 [2024-10-17 16:51:25.874278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.787 [2024-10-17 16:51:25.874659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:18.787 [2024-10-17 16:51:25.874768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.874964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.874979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 
16:51:25.875268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:18392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.787 [2024-10-17 16:51:25.875479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18408 len:8 PRP1 0x0 PRP2 0x0 00:23:18.787 [2024-10-17 16:51:25.875539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.787 [2024-10-17 16:51:25.875566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18416 len:8 PRP1 0x0 PRP2 0x0 00:23:18.787 [2024-10-17 16:51:25.875589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.787 [2024-10-17 16:51:25.875611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:18424 len:8 PRP1 0x0 PRP2 0x0 00:23:18.787 [2024-10-17 16:51:25.875633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.787 [2024-10-17 16:51:25.875655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:8 PRP1 0x0 PRP2 0x0 00:23:18.787 [2024-10-17 16:51:25.875692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.787 [2024-10-17 16:51:25.875721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18440 len:8 PRP1 0x0 PRP2 0x0 00:23:18.787 [2024-10-17 16:51:25.875744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 [2024-10-17 16:51:25.875756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.787 [2024-10-17 16:51:25.875766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18448 len:8 PRP1 0x0 PRP2 0x0 00:23:18.787 [2024-10-17 16:51:25.875789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.787 
[2024-10-17 16:51:25.875801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.787 [2024-10-17 16:51:25.875812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.787 [2024-10-17 16:51:25.875823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18456 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.875834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.875847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.875857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.875868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.875880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.875892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.875902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.875914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18472 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.875926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.875938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.875948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:18.788 [2024-10-17 16:51:25.875959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18480 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.875971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.875983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.875993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18488 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17928 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17936 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876121] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17944 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17960 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876288] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17968 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17976 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:18.788 [2024-10-17 16:51:25.876383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:18.788 [2024-10-17 16:51:25.876394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:8 PRP1 0x0 PRP2 0x0 00:23:18.788 [2024-10-17 16:51:25.876407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876464] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bf490 was disconnected and freed. reset controller. 
00:23:18.788 [2024-10-17 16:51:25.876482] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:18.788 [2024-10-17 16:51:25.876521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.788 [2024-10-17 16:51:25.876540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.788 [2024-10-17 16:51:25.876568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.788 [2024-10-17 16:51:25.876595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.788 [2024-10-17 16:51:25.876623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.788 [2024-10-17 16:51:25.876636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:18.788 [2024-10-17 16:51:25.879863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.788 [2024-10-17 16:51:25.879903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189c620 (9): Bad file descriptor 00:23:18.788 [2024-10-17 16:51:26.030168] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:18.788 8329.90 IOPS, 32.54 MiB/s [2024-10-17T14:51:32.478Z] 8352.36 IOPS, 32.63 MiB/s [2024-10-17T14:51:32.478Z] 8364.83 IOPS, 32.68 MiB/s [2024-10-17T14:51:32.478Z] 8377.08 IOPS, 32.72 MiB/s [2024-10-17T14:51:32.478Z] 8393.50 IOPS, 32.79 MiB/s [2024-10-17T14:51:32.478Z] 8402.13 IOPS, 32.82 MiB/s 00:23:18.788 Latency(us) 00:23:18.788 [2024-10-17T14:51:32.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.788 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:18.788 Verification LBA range: start 0x0 length 0x4000 00:23:18.788 NVMe0n1 : 15.01 8404.47 32.83 672.55 0.00 14073.73 582.54 21165.70 00:23:18.788 [2024-10-17T14:51:32.478Z] =================================================================================================================== 00:23:18.788 [2024-10-17T14:51:32.478Z] Total : 8404.47 32.83 672.55 0.00 14073.73 582.54 21165.70 00:23:18.788 Received shutdown signal, test time was about 15.000000 seconds 00:23:18.788 00:23:18.788 Latency(us) 00:23:18.788 [2024-10-17T14:51:32.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.788 [2024-10-17T14:51:32.478Z] =================================================================================================================== 00:23:18.788 [2024-10-17T14:51:32.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:18.788 16:51:31 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2423725 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2423725 /var/tmp/bdevperf.sock 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2423725 ']' 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:18.788 16:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:18.788 [2024-10-17 16:51:32.232292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.788 16:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:19.045 [2024-10-17 16:51:32.493027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:19.045 16:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:19.608 NVMe0n1 00:23:19.608 16:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:19.864 00:23:19.864 16:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.121 00:23:20.121 16:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.121 16:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:20.473 16:51:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.730 16:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:24.005 16:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.005 16:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:24.006 16:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2424397 00:23:24.006 16:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.006 16:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2424397 00:23:25.379 { 00:23:25.379 "results": [ 00:23:25.379 { 00:23:25.379 "job": "NVMe0n1", 00:23:25.379 "core_mask": "0x1", 00:23:25.379 "workload": "verify", 00:23:25.379 "status": "finished", 00:23:25.379 "verify_range": { 00:23:25.379 "start": 0, 00:23:25.379 "length": 16384 00:23:25.379 }, 00:23:25.379 "queue_depth": 128, 00:23:25.379 "io_size": 4096, 00:23:25.379 "runtime": 1.006572, 00:23:25.379 "iops": 8559.745353536558, 00:23:25.379 "mibps": 33.43650528725218, 00:23:25.379 "io_failed": 0, 00:23:25.379 "io_timeout": 0, 00:23:25.379 "avg_latency_us": 
14884.306767082775, 00:23:25.379 "min_latency_us": 3373.8903703703704, 00:23:25.379 "max_latency_us": 12815.92888888889 00:23:25.379 } 00:23:25.379 ], 00:23:25.379 "core_count": 1 00:23:25.379 } 00:23:25.379 16:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.379 [2024-10-17 16:51:31.745962] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:23:25.379 [2024-10-17 16:51:31.746079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423725 ] 00:23:25.379 [2024-10-17 16:51:31.806396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.379 [2024-10-17 16:51:31.863372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.379 [2024-10-17 16:51:34.242657] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:25.379 [2024-10-17 16:51:34.242756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.379 [2024-10-17 16:51:34.242780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-10-17 16:51:34.242798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.379 [2024-10-17 16:51:34.242812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-10-17 16:51:34.242826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:25.379 [2024-10-17 16:51:34.242840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-10-17 16:51:34.242853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.379 [2024-10-17 16:51:34.242867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.379 [2024-10-17 16:51:34.242881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:25.379 [2024-10-17 16:51:34.242926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:25.379 [2024-10-17 16:51:34.242958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178b620 (9): Bad file descriptor 00:23:25.379 [2024-10-17 16:51:34.247815] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:25.379 Running I/O for 1 seconds... 
00:23:25.379 8488.00 IOPS, 33.16 MiB/s 00:23:25.379 Latency(us) 00:23:25.379 [2024-10-17T14:51:39.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.379 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:25.379 Verification LBA range: start 0x0 length 0x4000 00:23:25.379 NVMe0n1 : 1.01 8559.75 33.44 0.00 0.00 14884.31 3373.89 12815.93 00:23:25.379 [2024-10-17T14:51:39.069Z] =================================================================================================================== 00:23:25.379 [2024-10-17T14:51:39.069Z] Total : 8559.75 33.44 0.00 0.00 14884.31 3373.89 12815.93 00:23:25.379 16:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.379 16:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:25.380 16:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:26.203 16:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.462 16:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:29.741 16:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.741 16:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2423725 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2423725 ']' 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2423725 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2423725 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2423725' 00:23:29.742 killing process with pid 2423725 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2423725 00:23:29.742 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2423725 00:23:30.000 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:30.000 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.258 rmmod nvme_tcp 00:23:30.258 rmmod nvme_fabrics 00:23:30.258 rmmod nvme_keyring 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2421573 ']' 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2421573 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2421573 ']' 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2421573 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2421573 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2421573' 00:23:30.258 killing process with pid 2421573 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2421573 00:23:30.258 16:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2421573 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.518 16:51:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.052 00:23:33.052 real 0m35.588s 00:23:33.052 user 2m6.024s 00:23:33.052 sys 
0m5.731s 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 ************************************ 00:23:33.052 END TEST nvmf_failover 00:23:33.052 ************************************ 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.052 ************************************ 00:23:33.052 START TEST nvmf_host_discovery 00:23:33.052 ************************************ 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:33.052 * Looking for test storage... 
00:23:33.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.052 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.053 --rc genhtml_branch_coverage=1 00:23:33.053 --rc genhtml_function_coverage=1 00:23:33.053 --rc 
genhtml_legend=1 00:23:33.053 --rc geninfo_all_blocks=1 00:23:33.053 --rc geninfo_unexecuted_blocks=1 00:23:33.053 00:23:33.053 ' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.053 --rc genhtml_branch_coverage=1 00:23:33.053 --rc genhtml_function_coverage=1 00:23:33.053 --rc genhtml_legend=1 00:23:33.053 --rc geninfo_all_blocks=1 00:23:33.053 --rc geninfo_unexecuted_blocks=1 00:23:33.053 00:23:33.053 ' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.053 --rc genhtml_branch_coverage=1 00:23:33.053 --rc genhtml_function_coverage=1 00:23:33.053 --rc genhtml_legend=1 00:23:33.053 --rc geninfo_all_blocks=1 00:23:33.053 --rc geninfo_unexecuted_blocks=1 00:23:33.053 00:23:33.053 ' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.053 --rc genhtml_branch_coverage=1 00:23:33.053 --rc genhtml_function_coverage=1 00:23:33.053 --rc genhtml_legend=1 00:23:33.053 --rc geninfo_all_blocks=1 00:23:33.053 --rc geninfo_unexecuted_blocks=1 00:23:33.053 00:23:33.053 ' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.053 16:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.053 16:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.053 16:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.053 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.958 
16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.958 16:51:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:34.958 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:34.958 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:34.958 Found net devices under 0000:09:00.0: cvl_0_0 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:34.958 Found net devices under 0000:09:00.1: cvl_0_1 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.958 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:34.958 00:23:34.958 --- 10.0.0.2 ping statistics --- 00:23:34.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.958 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:34.959 00:23:34.959 --- 10.0.0.1 ping statistics --- 00:23:34.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.959 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.959 
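The ip/iptables trace above boils down to a small topology: the target-side port `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace and addressed `10.0.0.2`, the initiator-side port `cvl_0_1` stays in the root namespace as `10.0.0.1`, and a tagged iptables rule admits NVMe/TCP traffic on port 4420. A sketch of that sequence (names and addresses taken from this log; the commands are collected and printed rather than run, since the real steps need root and the physical E810 ports):

```shell
# Per-test topology as nvmf/common.sh appears to build it in this run.
ns=cvl_0_0_ns_spdk        # target namespace, from the log
target_if=cvl_0_0         # target-side net device
target_ip=10.0.0.2
init_if=cvl_0_1           # initiator-side net device
init_ip=10.0.0.1

setup_cmds=$(cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add $init_ip/24 dev $init_if
ip netns exec $ns ip addr add $target_ip/24 dev $target_if
ip link set $init_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $init_if -p tcp --dport 4420 -j ACCEPT"
EOF
)
printf '%s\n' "$setup_cmds"
```

The `-m comment` tag is what lets the teardown path (`iptables-save | grep -v SPDK_NVMF | iptables-restore`, visible earlier in this log) strip exactly the rules the test added; the two `ping -c 1` checks then confirm reachability in both directions across the namespace boundary.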
16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2427128 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2427128 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2427128 ']' 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:34.959 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.217 [2024-10-17 16:51:48.694750] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:23:35.217 [2024-10-17 16:51:48.694832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:35.217 [2024-10-17 16:51:48.763210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:35.217 [2024-10-17 16:51:48.826834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:35.217 [2024-10-17 16:51:48.826896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:35.217 [2024-10-17 16:51:48.826923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:35.217 [2024-10-17 16:51:48.826936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:35.217 [2024-10-17 16:51:48.826948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:35.217 [2024-10-17 16:51:48.827706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:35.476 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:35.476 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:23:35.476 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 [2024-10-17 16:51:48.969925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 [2024-10-17 16:51:48.978148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 null0
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 null1
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.477 16:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2427162
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2427162 /tmp/host.sock
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2427162 ']'
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:23:35.477 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:35.477 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.477 [2024-10-17 16:51:49.057751] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:23:35.477 [2024-10-17 16:51:49.057821] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427162 ]
00:23:35.477 [2024-10-17 16:51:49.122977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:35.736 [2024-10-17 16:51:49.189008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:35.736 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 [2024-10-17 16:51:49.587770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:35.995 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:36.253 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.253 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:36.253 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:23:36.253 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:23:36.254 16:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
[2024-10-17 16:51:50.367715] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
[2024-10-17 16:51:50.367762] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-10-17 16:51:50.367791] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
[2024-10-17 16:51:50.455061] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
[2024-10-17 16:51:50.558742] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-10-17 16:51:50.558765] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:23:37.335 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.336 16:51:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.336 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-10-17 16:51:51.043974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-10-17 16:51:51.044330] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-10-17 16:51:51.044369] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.595 16:51:51
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:37.595 16:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:37.595 [2024-10-17 16:51:51.171196] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:37.595 [2024-10-17 16:51:51.272227] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.595 [2024-10-17 16:51:51.272249] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:37.595 [2024-10-17 16:51:51.272258] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 
-- # xargs 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.574 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.876 [2024-10-17 16:51:52.260114] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:38.876 [2024-10-17 16:51:52.260159] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.876 [2024-10-17 16:51:52.260892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.876 [2024-10-17 16:51:52.260927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.876 [2024-10-17 16:51:52.260945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:38.876 [2024-10-17 16:51:52.260961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.876 [2024-10-17 16:51:52.260977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.876 [2024-10-17 16:51:52.260992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.876 [2024-10-17 16:51:52.261017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.876 [2024-10-17 16:51:52.261047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.876 [2024-10-17 16:51:52.261061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.876 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:38.877 16:51:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.877 [2024-10-17 16:51:52.270888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.877 [2024-10-17 16:51:52.280935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 [2024-10-17 16:51:52.281123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.281154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.281172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.281196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.281230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 [2024-10-17 16:51:52.281249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.877 
[2024-10-17 16:51:52.281265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.877 [2024-10-17 16:51:52.281286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:38.877 [2024-10-17 16:51:52.291023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 [2024-10-17 16:51:52.291164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.291193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.291210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.291232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.291265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 [2024-10-17 16:51:52.291283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.877 [2024-10-17 16:51:52.291312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.877 [2024-10-17 16:51:52.291335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:38.877 [2024-10-17 16:51:52.301114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 [2024-10-17 16:51:52.301268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.301297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.301314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.301337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.301370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 [2024-10-17 16:51:52.301388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.877 [2024-10-17 16:51:52.301401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.877 [2024-10-17 16:51:52.301421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
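The repeated `autotest_common.sh@914`..`@918` xtrace lines above show a retry helper evaluating a condition up to 10 times with a 1-second sleep between attempts. A minimal re-creation of that pattern (the name, retry count, and eval structure mirror the xtrace; the exact upstream implementation in `autotest_common.sh` may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the waitforcondition helper visible in the
# xtrace: retry an arbitrary bash condition up to 10 times, sleeping
# 1 second between failed attempts.
waitforcondition() {
	local cond=$1
	local max=10
	while ((max--)); do
		# eval lets the caller pass compound conditions such as
		# 'get_notification_count && ((notification_count == expected_count))'
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1
}

# Example: a condition that becomes true on the first evaluation,
# so the helper returns immediately without sleeping.
i=0
waitforcondition '(( ++i >= 1 ))' && echo "condition met"
```

This is why the log shows the same `eval '[[' ...` line twice at 16:51:51 and 16:51:52: the first evaluation of `get_subsystem_paths` returned only `4420`, the helper slept one second (`@920 -- # sleep 1`), and the second attempt saw `4420 4421`.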
00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.877 [2024-10-17 16:51:52.311196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.877 [2024-10-17 16:51:52.311349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.311395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.311413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.311436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.311468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.877 [2024-10-17 16:51:52.311485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.877 [2024-10-17 16:51:52.311501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.877 [2024-10-17 16:51:52.311520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.877 [2024-10-17 16:51:52.321273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 [2024-10-17 16:51:52.321516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.321551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.321568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.321591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.321612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 [2024-10-17 16:51:52.321626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.877 [2024-10-17 16:51:52.321639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.877 [2024-10-17 16:51:52.321659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:38.877 [2024-10-17 16:51:52.331374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 [2024-10-17 16:51:52.331554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.331582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.331599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.331621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.331641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 [2024-10-17 16:51:52.331655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.877 [2024-10-17 16:51:52.331669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.877 [2024-10-17 16:51:52.331688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
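The `host/discovery.sh@55` xtrace above assembles the bdev list by piping an RPC result through `jq`, `sort`, and `xargs`. A sketch of that pipeline, reconstructed from the trace (the `rpc_cmd` mock below is an assumption added so the example is self-contained; the real helper talks to the SPDK application over `/tmp/host.sock`):

```shell
#!/usr/bin/env bash
# Mock of SPDK's rpc_cmd, returning the kind of JSON array that
# `bdev_get_bdevs` produces (names chosen to match the log).
rpc_cmd() {
	echo '[{"name":"nvme0n2"},{"name":"nvme0n1"}]'
}

# Reconstruction of the get_bdev_list pipeline seen in the xtrace:
# extract each bdev name, sort for a stable order, and let xargs
# collapse the lines into a single space-separated string.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_bdev_list  # prints: nvme0n1 nvme0n2
```

The `sort | xargs` tail is what makes comparisons like `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` deterministic regardless of the order the RPC returns the bdevs in.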
00:23:38.877 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.877 [2024-10-17 16:51:52.341451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:38.877 [2024-10-17 16:51:52.341622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.877 [2024-10-17 16:51:52.341651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65a610 with addr=10.0.0.2, port=4420 00:23:38.877 [2024-10-17 16:51:52.341667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a610 is same with the state(6) to be set 00:23:38.877 [2024-10-17 16:51:52.341690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65a610 (9): Bad file descriptor 00:23:38.877 [2024-10-17 16:51:52.341711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.877 [2024-10-17 16:51:52.341725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.878 [2024-10-17 16:51:52.341738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.878 [2024-10-17 16:51:52.341757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:38.878 [2024-10-17 16:51:52.346802] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:38.878 [2024-10-17 16:51:52.346829] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.878 16:51:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.878 
16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.878 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.136 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:39.136 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:39.136 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:39.137 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:39.137 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:39.137 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.137 16:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.071 [2024-10-17 16:51:53.585813] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.071 [2024-10-17 16:51:53.585841] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.071 [2024-10-17 16:51:53.585867] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.071 [2024-10-17 16:51:53.672154] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:40.071 [2024-10-17 16:51:53.733887] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme0 done 00:23:40.071 [2024-10-17 16:51:53.733927] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.071 request: 00:23:40.071 { 00:23:40.071 "name": "nvme", 00:23:40.071 "trtype": "tcp", 00:23:40.071 "traddr": "10.0.0.2", 00:23:40.071 "adrfam": "ipv4", 00:23:40.071 "trsvcid": 
"8009", 00:23:40.071 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:40.071 "wait_for_attach": true, 00:23:40.071 "method": "bdev_nvme_start_discovery", 00:23:40.071 "req_id": 1 00:23:40.071 } 00:23:40.071 Got JSON-RPC error response 00:23:40.071 response: 00:23:40.071 { 00:23:40.071 "code": -17, 00:23:40.071 "message": "File exists" 00:23:40.071 } 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:40.071 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # 
get_bdev_list 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.330 request: 00:23:40.330 { 00:23:40.330 "name": "nvme_second", 00:23:40.330 "trtype": "tcp", 00:23:40.330 "traddr": "10.0.0.2", 00:23:40.330 "adrfam": "ipv4", 00:23:40.330 "trsvcid": "8009", 00:23:40.330 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:40.330 "wait_for_attach": true, 00:23:40.330 "method": "bdev_nvme_start_discovery", 00:23:40.330 "req_id": 1 00:23:40.330 } 00:23:40.330 Got JSON-RPC error response 00:23:40.330 response: 00:23:40.330 { 00:23:40.330 "code": -17, 00:23:40.330 "message": "File exists" 00:23:40.330 } 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:40.330 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.331 16:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.266 [2024-10-17 16:51:54.937368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.266 [2024-10-17 16:51:54.937454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x667340 with addr=10.0.0.2, port=8010 00:23:41.266 [2024-10-17 16:51:54.937491] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:41.266 [2024-10-17 16:51:54.937518] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:41.266 [2024-10-17 16:51:54.937542] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:42.639 [2024-10-17 16:51:55.939714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.639 [2024-10-17 16:51:55.939751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x667340 with addr=10.0.0.2, port=8010 00:23:42.639 [2024-10-17 16:51:55.939774] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:42.639 [2024-10-17 16:51:55.939786] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:42.639 [2024-10-17 16:51:55.939798] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:43.573 [2024-10-17 16:51:56.941994] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:43.573 request: 00:23:43.573 { 00:23:43.573 "name": "nvme_second", 00:23:43.573 "trtype": "tcp", 00:23:43.573 "traddr": "10.0.0.2", 00:23:43.573 "adrfam": "ipv4", 00:23:43.573 "trsvcid": "8010", 00:23:43.573 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:43.573 "wait_for_attach": false, 00:23:43.573 "attach_timeout_ms": 3000, 00:23:43.573 "method": "bdev_nvme_start_discovery", 00:23:43.573 "req_id": 1 00:23:43.573 } 00:23:43.573 Got JSON-RPC error response 00:23:43.573 response: 00:23:43.573 { 00:23:43.573 "code": -110, 00:23:43.573 "message": "Connection timed out" 00:23:43.573 } 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 
00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2427162 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.573 16:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.573 rmmod nvme_tcp 00:23:43.573 rmmod nvme_fabrics 00:23:43.573 rmmod nvme_keyring 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:43.573 
16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2427128 ']' 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2427128 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2427128 ']' 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2427128 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2427128 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2427128' 00:23:43.573 killing process with pid 2427128 00:23:43.573 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2427128 00:23:43.574 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2427128 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:43.833 16:51:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.833 16:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.739 00:23:45.739 real 0m13.167s 00:23:45.739 user 0m18.851s 00:23:45.739 sys 0m2.843s 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.739 ************************************ 00:23:45.739 END TEST nvmf_host_discovery 00:23:45.739 ************************************ 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.739 
************************************ 00:23:45.739 START TEST nvmf_host_multipath_status 00:23:45.739 ************************************ 00:23:45.739 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:45.997 * Looking for test storage... 00:23:45.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.997 
16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.997 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( 
ver1[v] < ver2[v] )) 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:45.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.998 --rc genhtml_branch_coverage=1 00:23:45.998 --rc genhtml_function_coverage=1 00:23:45.998 --rc genhtml_legend=1 00:23:45.998 --rc geninfo_all_blocks=1 00:23:45.998 --rc geninfo_unexecuted_blocks=1 00:23:45.998 00:23:45.998 ' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:45.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.998 --rc genhtml_branch_coverage=1 00:23:45.998 --rc genhtml_function_coverage=1 00:23:45.998 --rc genhtml_legend=1 00:23:45.998 --rc geninfo_all_blocks=1 00:23:45.998 --rc geninfo_unexecuted_blocks=1 00:23:45.998 00:23:45.998 ' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:45.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.998 --rc genhtml_branch_coverage=1 00:23:45.998 --rc genhtml_function_coverage=1 00:23:45.998 --rc genhtml_legend=1 00:23:45.998 --rc geninfo_all_blocks=1 00:23:45.998 --rc geninfo_unexecuted_blocks=1 00:23:45.998 00:23:45.998 ' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:45.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.998 --rc genhtml_branch_coverage=1 00:23:45.998 --rc genhtml_function_coverage=1 00:23:45.998 --rc genhtml_legend=1 00:23:45.998 --rc geninfo_all_blocks=1 00:23:45.998 --rc 
geninfo_unexecuted_blocks=1 00:23:45.998 00:23:45.998 ' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.998 
16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:45.998 16:51:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.998 16:51:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:48.529 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:48.529 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:48.529 Found net devices under 0000:09:00.0: cvl_0_0 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.529 16:52:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:48.529 Found net devices under 0000:09:00.1: cvl_0_1 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.529 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.530 16:52:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:48.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:23:48.530 00:23:48.530 --- 10.0.0.2 ping statistics --- 00:23:48.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.530 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:23:48.530 00:23:48.530 --- 10.0.0.1 ping statistics --- 00:23:48.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.530 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2430319 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2430319 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2430319 ']' 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.530 16:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:48.530 [2024-10-17 16:52:01.812873] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:23:48.530 [2024-10-17 16:52:01.812945] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.530 [2024-10-17 16:52:01.874523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:48.530 [2024-10-17 16:52:01.930787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.530 [2024-10-17 16:52:01.930839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:48.530 [2024-10-17 16:52:01.930868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.530 [2024-10-17 16:52:01.930879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.530 [2024-10-17 16:52:01.930888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.530 [2024-10-17 16:52:01.932224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.530 [2024-10-17 16:52:01.932230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2430319 00:23:48.530 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:48.788 [2024-10-17 16:52:02.372736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.788 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:49.047 Malloc0 00:23:49.305 16:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:49.565 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:49.823 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.081 [2024-10-17 16:52:03.564346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.081 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:50.340 [2024-10-17 16:52:03.837106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2430489 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2430489 /var/tmp/bdevperf.sock 00:23:50.340 16:52:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2430489 ']' 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.340 16:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:50.598 16:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.598 16:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:50.598 16:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:50.856 16:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:51.114 Nvme0n1 00:23:51.114 16:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:51.680 Nvme0n1 00:23:51.680 16:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:51.681 16:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:53.581 16:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:53.581 16:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:54.146 16:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.146 16:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:55.520 16:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:55.520 16:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.520 16:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.520 16:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.520 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.520 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.520 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.520 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.778 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.778 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.778 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.778 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.036 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.036 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.036 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.036 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.294 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.294 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.294 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.294 16:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.552 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.552 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.552 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.552 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.118 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.118 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:57.118 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:57.118 16:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.685 16:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:58.623 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:58.623 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:58.623 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.623 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.881 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.881 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:58.881 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.882 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.140 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.140 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.140 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.140 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.398 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.398 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.398 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.398 16:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.656 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.656 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.656 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.656 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.914 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.914 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.914 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.914 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.172 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.172 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:00.172 16:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.430 16:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:00.689 16:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.068 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.326 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.326 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.326 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.326 16:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:02.584 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.584 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:02.584 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.584 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.842 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.842 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.842 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.842 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.100 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.100 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.100 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.100 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.357 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.357 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:03.357 16:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.615 16:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:04.181 16:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:05.116 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:05.116 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.116 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.116 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.374 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.374 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.374 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.374 16:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.632 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.632 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.632 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.632 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.890 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.890 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.890 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.890 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.148 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.148 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.148 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.148 16:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.715 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.715 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:06.715 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.715 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.973 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.973 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:06.973 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:07.231 16:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:07.489 16:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:08.422 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:08.422 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:08.422 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.422 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.680 16:52:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.680 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:08.680 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.680 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.245 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.245 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.245 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.245 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.503 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.503 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.503 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.503 16:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.761 
16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.761 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:09.761 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.761 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.019 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.019 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:10.019 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.019 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.280 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.280 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:10.280 16:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:10.582 16:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:10.866 16:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:11.799 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:11.799 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:11.799 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.799 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.366 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.366 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:12.366 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.366 16:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.624 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.624 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.624 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.624 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.882 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.882 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.882 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.882 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.141 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.141 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:13.141 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.141 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.399 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.399 16:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:13.399 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.399 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.657 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.657 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:13.915 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:13.915 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:14.482 16:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.740 16:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:15.673 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:15.673 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:15.673 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:15.673 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:15.931 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.931 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:15.931 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.931 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.189 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.189 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.189 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.189 16:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.447 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.447 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.447 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:16.447 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.705 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.705 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.705 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.705 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.963 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.964 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.964 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.964 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.222 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.222 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:17.222 16:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.789 16:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.789 16:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.163 16:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.421 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.421 16:52:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.421 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.421 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.679 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.679 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.679 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.679 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.937 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.937 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.937 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.937 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.503 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.503 
16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.503 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.503 16:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.503 16:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.503 16:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:20.503 16:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.070 16:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:21.070 16:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:22.445 16:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:22.445 16:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.445 16:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.445 16:52:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.445 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.445 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.445 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.445 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.703 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.703 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.703 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.703 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.961 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.961 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.961 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.961 16:52:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.219 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.219 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.219 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.219 16:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.477 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.477 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.477 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.478 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.052 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.052 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:24.052 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.052 16:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:24.621 16:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:25.573 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:25.573 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:25.573 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.573 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.831 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.831 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.831 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.831 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.089 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.089 16:52:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.089 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.089 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.347 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.347 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.347 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.347 16:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.605 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.605 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:26.605 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.606 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.863 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.863 
16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:26.863 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.863 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2430489 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2430489 ']' 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2430489 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2430489 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2430489' 00:24:27.121 killing process with pid 2430489 00:24:27.121 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2430489 00:24:27.121 
16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2430489 00:24:27.121 { 00:24:27.121 "results": [ 00:24:27.121 { 00:24:27.121 "job": "Nvme0n1", 00:24:27.121 "core_mask": "0x4", 00:24:27.121 "workload": "verify", 00:24:27.121 "status": "terminated", 00:24:27.121 "verify_range": { 00:24:27.121 "start": 0, 00:24:27.121 "length": 16384 00:24:27.121 }, 00:24:27.121 "queue_depth": 128, 00:24:27.121 "io_size": 4096, 00:24:27.121 "runtime": 35.290532, 00:24:27.121 "iops": 7985.852976090017, 00:24:27.122 "mibps": 31.19473818785163, 00:24:27.122 "io_failed": 0, 00:24:27.122 "io_timeout": 0, 00:24:27.122 "avg_latency_us": 16001.285998069461, 00:24:27.122 "min_latency_us": 315.5437037037037, 00:24:27.122 "max_latency_us": 4026531.84 00:24:27.122 } 00:24:27.122 ], 00:24:27.122 "core_count": 1 00:24:27.122 } 00:24:27.393 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2430489 00:24:27.393 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:27.393 [2024-10-17 16:52:03.905550] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:24:27.393 [2024-10-17 16:52:03.905635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430489 ] 00:24:27.393 [2024-10-17 16:52:03.966821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.393 [2024-10-17 16:52:04.025765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.393 Running I/O for 90 seconds... 
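The `check_status`/`port_status` calls traced above repeatedly run `rpc.py bdev_nvme_get_io_paths` and filter the result with `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'` to verify the `current`/`connected`/`accessible` flags after each ANA-state change. The same lookup can be sketched in Python; `SAMPLE` below is a hypothetical, minimal stand-in for the RPC payload, not actual SPDK output:

```python
import json

# Hypothetical, trimmed-down payload shaped like `bdev_nvme_get_io_paths`
# output (illustrative only -- not captured from the log above).
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(paths, trsvcid, field):
    """Mirror of the jq filter: find the io_path whose listener uses
    `trsvcid` and return its `field` value (current/connected/accessible)."""
    for group in paths["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None  # no path on that service id

if __name__ == "__main__":
    print(port_status(SAMPLE, "4420", "current"))     # True
    print(port_status(SAMPLE, "4421", "accessible"))  # False
```

This is why the trace compares the jq output against `\t\r\u\e` / `\f\a\l\s\e` after every `set_ANA_state`: an `inaccessible` listener should flip `accessible` (and eventually `current`) to false on that port while the other port stays usable.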
00:24:27.393 8629.00 IOPS, 33.71 MiB/s [2024-10-17T14:52:41.083Z] 8577.00 IOPS, 33.50 MiB/s [2024-10-17T14:52:41.083Z] 8625.00 IOPS, 33.69 MiB/s [2024-10-17T14:52:41.083Z] 8657.00 IOPS, 33.82 MiB/s [2024-10-17T14:52:41.083Z] 8662.00 IOPS, 33.84 MiB/s [2024-10-17T14:52:41.083Z] 8629.17 IOPS, 33.71 MiB/s [2024-10-17T14:52:41.083Z] 8594.00 IOPS, 33.57 MiB/s [2024-10-17T14:52:41.083Z] 8560.00 IOPS, 33.44 MiB/s [2024-10-17T14:52:41.083Z] 8521.67 IOPS, 33.29 MiB/s [2024-10-17T14:52:41.083Z] 8538.60 IOPS, 33.35 MiB/s [2024-10-17T14:52:41.083Z] 8548.64 IOPS, 33.39 MiB/s [2024-10-17T14:52:41.083Z] 8553.00 IOPS, 33.41 MiB/s [2024-10-17T14:52:41.083Z] 8556.92 IOPS, 33.43 MiB/s [2024-10-17T14:52:41.083Z] 8562.71 IOPS, 33.45 MiB/s [2024-10-17T14:52:41.083Z] 8566.07 IOPS, 33.46 MiB/s [2024-10-17T14:52:41.083Z] [2024-10-17 16:52:20.706443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.393 [2024-10-17 16:52:20.706516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.706952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.706974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:24:27.393 [2024-10-17 16:52:20.707414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:27.393 [2024-10-17 16:52:20.707495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.393 [2024-10-17 16:52:20.707511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 
16:52:20.707624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.707950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.707977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.394 [2024-10-17 16:52:20.708678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:27.394 [2024-10-17 16:52:20.708704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.394 [2024-10-17 16:52:20.708720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion pairs omitted: WRITE (and one READ at lba:1024) on sqid:1, lba:1368-2032, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:001d-0071, 2024-10-17 16:52:20.708746 through 16:52:20.712845 ...]
00:24:27.396 8199.94 IOPS, 32.03 MiB/s
[2024-10-17T14:52:41.086Z] 7717.59 IOPS, 30.15 MiB/s
[2024-10-17T14:52:41.086Z] 7288.83 IOPS, 28.47 MiB/s
[2024-10-17T14:52:41.086Z] 6905.21 IOPS, 26.97 MiB/s
[2024-10-17T14:52:41.086Z] 6837.85 IOPS, 26.71 MiB/s
[2024-10-17T14:52:41.086Z] 6903.90 IOPS, 26.97 MiB/s
[2024-10-17T14:52:41.086Z] 6964.32 IOPS, 27.20 MiB/s
[2024-10-17T14:52:41.086Z] 7081.22 IOPS, 27.66 MiB/s
[2024-10-17T14:52:41.086Z] 7240.62 IOPS, 28.28 MiB/s
[2024-10-17T14:52:41.086Z] 7390.88 IOPS, 28.87 MiB/s
[2024-10-17T14:52:41.086Z] 7523.65 IOPS, 29.39 MiB/s
[2024-10-17T14:52:41.086Z] 7549.74 IOPS, 29.49 MiB/s
[2024-10-17T14:52:41.086Z] 7575.64 IOPS, 29.59 MiB/s
[2024-10-17T14:52:41.086Z] 7599.38 IOPS, 29.69 MiB/s
[2024-10-17T14:52:41.086Z] 7682.30 IOPS, 30.01 MiB/s
[2024-10-17T14:52:41.086Z] 7781.84 IOPS, 30.40 MiB/s
[2024-10-17T14:52:41.086Z] 7888.00 IOPS, 30.81 MiB/s
[2024-10-17T14:52:41.086Z] [2024-10-17 16:52:37.987711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.396 [2024-10-17 16:52:37.987770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion pairs omitted: WRITE lba:101944-102184 and READ lba:101408-101624 on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:005b-0074, 2024-10-17 16:52:37.987820 through 16:52:37.988872 ...]
00:24:27.397 [2024-10-17 16:52:37.988898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.988914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.988951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.988972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.988988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.397 [2024-10-17 16:52:37.989915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.989968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.989991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.990034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.990076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.990114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.397 [2024-10-17 16:52:37.990153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.397 [2024-10-17 16:52:37.990192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.397 [2024-10-17 16:52:37.990229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.397 [2024-10-17 16:52:37.990268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:27.397 [2024-10-17 16:52:37.990290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.990306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.990343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.990359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.990385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.990402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.990424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.990439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.990461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.990477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.991632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.991671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.991837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.991874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.991965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.991988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.992012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.992053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.992091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.992134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.398 [2024-10-17 16:52:37.992173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.992211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.398 [2024-10-17 16:52:37.992249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:27.398 [2024-10-17 16:52:37.992272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.992288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.992325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.992341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.992363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.992378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.992399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.992416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.992438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.992454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.992939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.992962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.992989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.993477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.993514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.993552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.993590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.993617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.993634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.995517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.995614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.995650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.399 [2024-10-17 16:52:37.995688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.995725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.399 [2024-10-17 16:52:37.995761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:27.399 [2024-10-17 16:52:37.995783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.995798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.995820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.995835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.995857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.995872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.995894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.995909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.995930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.995949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.995972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.996353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.996391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.996413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.996434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.999396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.999442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.999481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.400 [2024-10-17 16:52:37.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.999799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.400 [2024-10-17 16:52:37.999822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.400 [2024-10-17 16:52:37.999843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:37.999867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:37.999883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:37.999905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:37.999921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:37.999944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:37.999960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:37.999982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:37.999998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.401 [2024-10-17 16:52:38.000901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.000924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.000942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.001764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.401 [2024-10-17 16:52:38.001787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.401 [2024-10-17 16:52:38.001815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.001833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.001856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.001873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.001896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.001913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.001936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.001962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.001985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.002009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.002059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.002098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.002455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.002557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.002574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.003135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.003359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.402 [2024-10-17 16:52:38.003523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.402 [2024-10-17 16:52:38.003570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:27.402 [2024-10-17 16:52:38.003607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.003623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.003668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.003712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.003752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.003792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.003831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.003870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.003909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.003959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.003982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.003998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.004031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.004047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.004070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.004086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.004109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.004126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.005747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.005773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.005801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.005818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.005848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.005865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.005900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.005916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.005939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.005956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.005979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.005995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.006028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.006046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.006085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.006108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.403 [2024-10-17 16:52:38.006124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.006146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.006162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.006185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.403 [2024-10-17 16:52:38.006200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:27.403 [2024-10-17 16:52:38.006222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.006483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.006558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.006674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.006831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.006870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.006956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.006979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.006995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.007025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.007042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.009453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.009714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.009757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.404 [2024-10-17 16:52:38.009797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.009965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:27.404 [2024-10-17 16:52:38.009986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.404 [2024-10-17 16:52:38.010030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.010075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.010114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.010456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.010495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.010534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.010595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.010611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.011611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.011672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.011712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.011757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.011796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.011835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.011873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.011912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.011951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.011973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.011988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.012037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.012077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.012115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.012153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.012191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.405 [2024-10-17 16:52:38.012235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.012800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.405 [2024-10-17 16:52:38.012827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.405 [2024-10-17 16:52:38.012845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:27.405 [2024-10-17 16:52:38.012868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.405 [2024-10-17 16:52:38.012884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:27.405 [2024-10-17 16:52:38.012906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.405 [2024-10-17 16:52:38.012923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:27.405 [2024-10-17 16:52:38.012945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.405 [2024-10-17 16:52:38.012962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:27.405 [2024-10-17 16:52:38.012985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.405 [2024-10-17 16:52:38.013016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:27.405 [2024-10-17 16:52:38.013040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.405 [2024-10-17 16:52:38.013057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:27.405 [2024-10-17 16:52:38.013079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.405 [2024-10-17 16:52:38.013095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.013295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.013447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.013676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.013715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.013756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.013834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.013857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.013873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.014513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.014559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.014599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.014896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.014964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.014980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.015012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.015030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.015053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.015070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.015092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.015108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.015130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.015147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.016567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.016612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.016652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.016691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.016730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.406 [2024-10-17 16:52:38.016776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:27.406 [2024-10-17 16:52:38.016808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.406 [2024-10-17 16:52:38.016826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.016848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.016864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.016887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.016903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.016940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.016956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.016979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.017678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.017740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.017756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.019833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.407 [2024-10-17 16:52:38.019858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.019886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.019910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.019934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.019950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.019973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.019990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.407 [2024-10-17 16:52:38.020404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:27.407 [2024-10-17 16:52:38.020428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.020444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.020465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.020481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.020503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.020534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.020558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.020574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.020596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.020612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.021191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.021237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.021330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.021460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.021505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.021890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.021955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.021972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.022014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.022032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.022055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.408 [2024-10-17 16:52:38.022072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.023560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.023586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.023612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.023634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.023657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.023674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.023696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.023713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.023736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.408 [2024-10-17 16:52:38.023753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:27.408 [2024-10-17 16:52:38.023775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.408 [2024-10-17 16:52:38.023791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:27.408 [2024-10-17 16:52:38.023813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.408 [2024-10-17 16:52:38.023829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:27.408 [2024-10-17 16:52:38.023852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.408 [2024-10-17 16:52:38.023868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:27.408 [2024-10-17 16:52:38.023890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.408 [2024-10-17 16:52:38.023906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:27.408 [2024-10-17 16:52:38.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.408 [2024-10-17 16:52:38.023967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.408 [2024-10-17 16:52:38.023990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.408 [2024-10-17 16:52:38.024029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.024804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.024866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.024883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.409 [2024-10-17 16:52:38.027930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.027968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.027991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.028014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.028039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.028055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.028077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.028093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.028115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.028131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.028153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.409 [2024-10-17 16:52:38.028170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:27.409 [2024-10-17 16:52:38.028640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.028663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.028710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.028749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.028794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.028834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.028888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.028965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.029042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.029158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.029316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.029377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.029393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.030489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.030514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.030542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.410 [2024-10-17 16:52:38.030560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.030582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.030598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:27.410 [2024-10-17 16:52:38.030620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.410 [2024-10-17 16:52:38.030636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.030675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.030712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.030750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.030804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.030857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.030901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.030941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.030971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.030986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.031036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.031151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.031361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.031426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.031464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.410 [2024-10-17 16:52:38.031604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:27.410 [2024-10-17 16:52:38.031627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.410 [2024-10-17 16:52:38.031643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.031682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.031721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.031759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.031797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.031836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.031873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.031912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.031965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.031988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.032012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.034568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.034613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.034653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.034692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.034730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.034769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.034807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.034846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.034883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.034936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.034992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.411 [2024-10-17 16:52:38.035821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.411 [2024-10-17 16:52:38.035859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:27.411 [2024-10-17 16:52:38.035896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.035911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.035932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.035947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.035989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.036016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.037295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.037663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.037701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.037744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.037785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.037808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.037824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.038688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.038725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.038761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.038954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.038976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.038993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.039041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.039081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.039120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.039158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.039197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.039236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.039283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.039327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.039349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.412 [2024-10-17 16:52:38.039366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.040898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.412 [2024-10-17 16:52:38.040922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:27.412 [2024-10-17 16:52:38.040979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.413 [2024-10-17 16:52:38.041021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.413 [2024-10-17 16:52:38.041067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.413 [2024-10-17 16:52:38.041106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.413 [2024-10-17 16:52:38.041144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.413 [2024-10-17 16:52:38.041183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.413 [2024-10-17 16:52:38.041221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.413 [2024-10-17 16:52:38.041260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.413 [2024-10-17 16:52:38.041298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.413 [2024-10-17 16:52:38.041337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:27.413 [2024-10-17 16:52:38.041364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.413 [2024-10-17 16:52:38.041381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.413 [2024-10-17 16:52:38.041430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.413 [2024-10-17 16:52:38.041468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.413 [2024-10-17 16:52:38.041520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.413 [2024-10-17 16:52:38.041572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.413 [2024-10-17 16:52:38.041610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.413 [2024-10-17 16:52:38.041647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.413 [2024-10-17 16:52:38.041685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.413 [2024-10-17 16:52:38.041722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.413 [2024-10-17 16:52:38.041759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.413 [2024-10-17 16:52:38.041796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.413 [2024-10-17 16:52:38.041818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.413 [2024-10-17 16:52:38.041833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:27.413 7945.12 IOPS, 31.04 MiB/s [2024-10-17T14:52:41.103Z] 7966.24 IOPS, 31.12 MiB/s [2024-10-17T14:52:41.103Z] 7985.29 IOPS, 31.19 MiB/s [2024-10-17T14:52:41.103Z] Received shutdown signal, test time was about 35.291471 seconds
00:24:27.413
00:24:27.413 Latency(us)
00:24:27.413 [2024-10-17T14:52:41.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.413 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:27.413 Verification LBA range: start 0x0 length 0x4000
00:24:27.413 Nvme0n1 : 35.29 7985.85 31.19 0.00 0.00 16001.29 315.54 4026531.84
00:24:27.413 [2024-10-17T14:52:41.103Z] ===================================================================================================================
00:24:27.413 [2024-10-17T14:52:41.103Z] Total : 7985.85 31.19 0.00 0.00 16001.29 315.54 4026531.84
00:24:27.413 16:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:27.671 16:52:41
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.671 rmmod nvme_tcp 00:24:27.671 rmmod nvme_fabrics 00:24:27.671 rmmod nvme_keyring 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2430319 ']' 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2430319 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2430319 ']' 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2430319 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2430319 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2430319' 00:24:27.671 killing process with pid 2430319 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2430319 00:24:27.671 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2430319 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.930 16:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:30.465
00:24:30.465 real 0m44.188s
00:24:30.465 user 2m14.246s
00:24:30.465 sys 0m11.493s
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:30.465 ************************************
00:24:30.465 END TEST nvmf_host_multipath_status
00:24:30.465 ************************************
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:30.465 ************************************
00:24:30.465 START TEST nvmf_discovery_remove_ifc
00:24:30.465 ************************************
00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:30.465 * Looking for test storage...
00:24:30.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:24:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.465 --rc genhtml_branch_coverage=1 00:24:30.465 --rc genhtml_function_coverage=1 00:24:30.465 --rc genhtml_legend=1 00:24:30.465 --rc geninfo_all_blocks=1 00:24:30.465 --rc geninfo_unexecuted_blocks=1 00:24:30.465 00:24:30.465 ' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.465 --rc genhtml_branch_coverage=1 00:24:30.465 --rc genhtml_function_coverage=1 00:24:30.465 --rc genhtml_legend=1 00:24:30.465 --rc geninfo_all_blocks=1 00:24:30.465 --rc geninfo_unexecuted_blocks=1 00:24:30.465 00:24:30.465 ' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.465 --rc genhtml_branch_coverage=1 00:24:30.465 --rc genhtml_function_coverage=1 00:24:30.465 --rc genhtml_legend=1 00:24:30.465 --rc geninfo_all_blocks=1 00:24:30.465 --rc geninfo_unexecuted_blocks=1 00:24:30.465 00:24:30.465 ' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.465 --rc genhtml_branch_coverage=1 00:24:30.465 --rc genhtml_function_coverage=1 00:24:30.465 --rc genhtml_legend=1 00:24:30.465 --rc geninfo_all_blocks=1 00:24:30.465 --rc geninfo_unexecuted_blocks=1 00:24:30.465 00:24:30.465 ' 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.465 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:30.466 
16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.466 16:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.368 16:52:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.368 16:52:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:32.368 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.368 16:52:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:32.368 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:32.368 Found net devices under 0000:09:00.0: cvl_0_0 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:32.368 Found net devices under 0000:09:00.1: cvl_0_1 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 
-- # [[ tcp == tcp ]] 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.368 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:24:32.369 00:24:32.369 --- 10.0.0.2 ping statistics --- 00:24:32.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.369 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:24:32.369 00:24:32.369 --- 10.0.0.1 ping statistics --- 00:24:32.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.369 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2444716 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
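The `nvmf_tcp_init` plumbing traced above (namespace creation, interface move, addressing, the iptables ACCEPT rule, and the two-way ping check) can be sketched as the following dry run. Device names `cvl_0_0`/`cvl_0_1` and addresses come straight from the log; the commands need root, so they are echoed rather than executed here.

```shell
# One NIC port (cvl_0_0) is moved into a private namespace to act as the
# target at 10.0.0.2; its sibling (cvl_0_1) stays in the default namespace
# as the initiator at 10.0.0.1.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

run() { echo "+ $*"; }    # dry run; replace with: sudo "$@"

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic to the target port before testing reachability.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With both pings answering (0% loss in the log), the suite proceeds to start `nvmf_tgt` inside the namespace.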
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2444716 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2444716 ']' 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.369 16:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 [2024-10-17 16:52:45.797542] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:24:32.369 [2024-10-17 16:52:45.797647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.369 [2024-10-17 16:52:45.862240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.369 [2024-10-17 16:52:45.919917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.369 [2024-10-17 16:52:45.919974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.369 [2024-10-17 16:52:45.920010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.369 [2024-10-17 16:52:45.920023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.369 [2024-10-17 16:52:45.920033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.369 [2024-10-17 16:52:45.920652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.369 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.628 [2024-10-17 16:52:46.066913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.628 [2024-10-17 16:52:46.075126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:32.628 null0 00:24:32.628 [2024-10-17 16:52:46.107086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2444799 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2444799 /tmp/host.sock 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2444799 ']' 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:32.628 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.628 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.628 [2024-10-17 16:52:46.172857] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:24:32.628 [2024-10-17 16:52:46.172953] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444799 ] 00:24:32.628 [2024-10-17 16:52:46.250530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.628 [2024-10-17 16:52:46.313418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.887 16:52:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.887 16:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.268 [2024-10-17 16:52:47.524212] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:34.268 [2024-10-17 16:52:47.524253] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:34.268 [2024-10-17 16:52:47.524292] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:34.268 [2024-10-17 16:52:47.610560] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:34.268 [2024-10-17 16:52:47.707836] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:34.268 [2024-10-17 16:52:47.707896] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:34.268 [2024-10-17 16:52:47.707934] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:34.268 [2024-10-17 16:52:47.707956] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:34.268 [2024-10-17 16:52:47.707986] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
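The `rpc_cmd` call at `discovery_remove_ifc.sh@69` issues the `bdev_nvme_start_discovery` RPC against the host app's socket. A roughly equivalent standalone invocation via SPDK's `scripts/rpc.py` client might look as follows (echoed as a dry run; the flag values mirror the ones visible in the log, and assuming `rpc.py` is the stock SPDK RPC client rather than the suite's wrapper):

```shell
# Attach via discovery: connect to the discovery service on 10.0.0.2:8009,
# then attach controllers for every subsystem it reports, blocking until
# the attach completes (--wait-for-attach). The short loss/reconnect/fail
# timeouts make the later interface-removal teardown observable quickly.
rpc() { echo "+ rpc.py $*"; }    # dry run; replace with: scripts/rpc.py "$@"

rpc -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
```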
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.268 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
jq -r '.[].name' 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.269 16:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.203 16:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.576 16:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 
00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.511 16:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.444 16:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.445 16:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.445 16:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.377 16:52:53 
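The once-per-second `get_bdev_list`/`sleep 1` cycle repeating above is the suite's `wait_for_bdev` helper polling the host RPC socket. Reconstructed from the pipeline visible in the log (`bdev_get_bdevs | jq -r '.[].name' | sort | xargs`), it amounts to:

```shell
# List bdev names over the host RPC socket, normalized to one sorted,
# space-separated line; rpc_cmd is the test suite's RPC wrapper, so stub
# it (as the test below does) to try this outside the harness.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll until the bdev list matches the expectation: "nvme0n1" while the
# discovered controller is attached, "" once teardown has removed it.
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
```

In the log the list stays at `nvme0n1` for several iterations because the controller is still inside its 2-second loss timeout after `cvl_0_0` was taken down; the `wait_for_bdev ''` call only returns once the reset sequence gives up and frees the bdev.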
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.377 16:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.636 [2024-10-17 16:52:53.150102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:39.636 [2024-10-17 16:52:53.150180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.636 [2024-10-17 16:52:53.150202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.150219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.636 [2024-10-17 16:52:53.150233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.150247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:39.636 [2024-10-17 16:52:53.150260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.150273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.636 [2024-10-17 16:52:53.150286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.150323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.636 [2024-10-17 16:52:53.150336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.150348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20478d0 is same with the state(6) to be set 00:24:39.636 [2024-10-17 16:52:53.160123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20478d0 (9): Bad file descriptor 00:24:39.636 [2024-10-17 16:52:53.170217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.636 [2024-10-17 16:52:53.170242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.170287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:64 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.636 [2024-10-17 16:52:53.170305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.170322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.636 [2024-10-17 16:52:53.170359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.636 [2024-10-17 16:52:53.170462] bdev_nvme.c:1722:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x206b050 was disconnected and freed in a reset ctrlr sequence. 00:24:39.636 [2024-10-17 16:52:53.170486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.569 [2024-10-17 16:52:54.214064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:40.569 [2024-10-17 16:52:54.214137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20478d0 with addr=10.0.0.2, port=4420 00:24:40.569 [2024-10-17 16:52:54.214168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20478d0 is same with the state(6) to be set 00:24:40.569 [2024-10-17 16:52:54.214230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20478d0 (9): Bad 
file descriptor 00:24:40.569 [2024-10-17 16:52:54.214711] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:40.569 [2024-10-17 16:52:54.214780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:40.569 [2024-10-17 16:52:54.214802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:40.569 [2024-10-17 16:52:54.214821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:40.569 [2024-10-17 16:52:54.214861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.569 [2024-10-17 16:52:54.214881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:40.569 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.570 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.570 16:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.945 [2024-10-17 16:52:55.217271] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev nvme0n1: Input/output error 00:24:41.945 [2024-10-17 16:52:55.217344] vbdev_gpt.c: 467:gpt_bdev_complete: *ERROR*: Gpt: bdev=nvme0n1 io error 00:24:41.945 [2024-10-17 16:52:55.217523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:41.945 [2024-10-17 16:52:55.217550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:41.945 [2024-10-17 16:52:55.217566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:41.945 [2024-10-17 16:52:55.217581] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:41.945 [2024-10-17 16:52:55.217957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.945 [2024-10-17 16:52:55.218013] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:41.945 [2024-10-17 16:52:55.218072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.945 [2024-10-17 16:52:55.218114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.945 [2024-10-17 16:52:55.218135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.945 [2024-10-17 16:52:55.218149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.945 [2024-10-17 16:52:55.218162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.945 [2024-10-17 16:52:55.218175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.945 [2024-10-17 16:52:55.218188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.945 
[2024-10-17 16:52:55.218202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.945 [2024-10-17 16:52:55.218215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.945 [2024-10-17 16:52:55.218228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.945 [2024-10-17 16:52:55.218240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:41.945 [2024-10-17 16:52:55.218496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c00 (9): Bad file descriptor 00:24:41.945 [2024-10-17 16:52:55.219516] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:41.945 [2024-10-17 16:52:55.219543] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.945 16:52:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.945 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.946 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.946 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.946 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.946 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.946 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:41.946 16:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 
-- # get_bdev_list 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:42.961 16:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.894 [2024-10-17 16:52:57.272163] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:43.894 [2024-10-17 16:52:57.272192] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:43.894 [2024-10-17 16:52:57.272216] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:43.894 [2024-10-17 16:52:57.358496] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.894 
16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:43.894 16:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.894 [2024-10-17 16:52:57.581977] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:43.894 [2024-10-17 16:52:57.582065] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:43.894 [2024-10-17 16:52:57.582099] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:43.894 [2024-10-17 16:52:57.582120] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:43.894 [2024-10-17 16:52:57.582133] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.152 [2024-10-17 16:52:57.629660] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x204bd00 was disconnected and freed. delete nvme_qpair. 
00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2444799 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2444799 ']' 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2444799 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2444799 
00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.086 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2444799' 00:24:45.087 killing process with pid 2444799 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2444799 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2444799 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.087 rmmod nvme_tcp 00:24:45.087 rmmod nvme_fabrics 00:24:45.087 rmmod nvme_keyring 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.087 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2444716 ']' 00:24:45.347 
16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2444716 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2444716 ']' 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2444716 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2444716 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2444716' 00:24:45.347 killing process with pid 2444716 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2444716 00:24:45.347 16:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2444716 00:24:45.347 [2024-10-17 16:52:58.813281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b708a0 is same with the state(6) to be set 00:24:45.347 [2024-10-17 16:52:58.813340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b708a0 is same with the state(6) to be set 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:45.347 16:52:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.347 16:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:47.883 00:24:47.883 real 0m17.426s 00:24:47.883 user 0m25.211s 00:24:47.883 sys 0m3.084s 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.883 ************************************ 00:24:47.883 END TEST nvmf_discovery_remove_ifc 00:24:47.883 ************************************ 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:47.883 16:53:01 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.883 ************************************ 00:24:47.883 START TEST nvmf_identify_kernel_target 00:24:47.883 ************************************ 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:47.883 * Looking for test storage... 00:24:47.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.883 16:53:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- 
# echo 2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.883 --rc genhtml_branch_coverage=1 00:24:47.883 --rc genhtml_function_coverage=1 00:24:47.883 --rc genhtml_legend=1 00:24:47.883 --rc geninfo_all_blocks=1 00:24:47.883 --rc geninfo_unexecuted_blocks=1 00:24:47.883 00:24:47.883 ' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.883 --rc genhtml_branch_coverage=1 00:24:47.883 --rc genhtml_function_coverage=1 00:24:47.883 --rc genhtml_legend=1 00:24:47.883 --rc geninfo_all_blocks=1 00:24:47.883 --rc geninfo_unexecuted_blocks=1 00:24:47.883 00:24:47.883 ' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.883 --rc genhtml_branch_coverage=1 00:24:47.883 --rc genhtml_function_coverage=1 00:24:47.883 --rc genhtml_legend=1 00:24:47.883 --rc geninfo_all_blocks=1 00:24:47.883 --rc geninfo_unexecuted_blocks=1 00:24:47.883 00:24:47.883 ' 00:24:47.883 16:53:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:47.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.883 --rc genhtml_branch_coverage=1 00:24:47.883 --rc genhtml_function_coverage=1 00:24:47.883 --rc genhtml_legend=1 00:24:47.883 --rc geninfo_all_blocks=1 00:24:47.883 --rc geninfo_unexecuted_blocks=1 00:24:47.883 00:24:47.883 ' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.883 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.884 16:53:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.884 16:53:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.884 16:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:24:49.787 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:49.787 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:49.787 Found net devices under 0000:09:00.0: cvl_0_0 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.787 
16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:49.787 Found net devices under 0000:09:00.1: cvl_0_1 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.787 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:24:49.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:24:49.788 00:24:49.788 --- 10.0.0.2 ping statistics --- 00:24:49.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.788 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:24:49.788 00:24:49.788 --- 10.0.0.1 ping statistics --- 00:24:49.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.788 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # 
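The network bring-up traced above (`nvmf_tcp_init` in `nvmf/common.sh`) isolates the target port in its own network namespace, assigns the 10.0.0.1/10.0.0.2 pair, opens TCP port 4420 in the firewall, and verifies reachability with `ping`. A minimal dry-run sketch of that sequence follows; it echoes each command instead of executing it, since the real commands need root and the `cvl_0_*` interface names are specific to this E810 host:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# run() prints the command it is given rather than executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                        # target-side namespace
run ip link set cvl_0_0 netns "$NS"                           # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                        # verify reachability
```

In the real run the `iptables` rule also carries an `-m comment` tag (`SPDK_NVMF:...`) so teardown can find and delete exactly the rules this test added.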
trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:24:49.788 16:53:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:49.788 16:53:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:50.723 Waiting for block devices as requested 00:24:50.723 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:50.981 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:50.981 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:50.981 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:50.981 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:51.239 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:51.239 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:51.239 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:51.239 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:51.497 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:51.497 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:51.497 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:51.755 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:51.755 0000:80:04.3 (8086 0e23): vfio-pci 
-> ioatdma 00:24:51.755 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:51.755 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:52.013 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:52.013 No valid GPT data, bailing 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 
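The `configure_kernel_target` steps traced below build a kernel NVMe-oF target entirely through the nvmet configfs tree: create the subsystem, attach `/dev/nvme0n1` as namespace 1, describe a TCP port on 10.0.0.1:4420, and link the subsystem to the port. A dry-run sketch of the sequence follows; note the log records only the echoed values, so the destination attribute files here are inferred from the standard nvmet configfs layout rather than shown in the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of configure_kernel_target from nvmf/common.sh.
# run() prints each command instead of executing it (the real writes
# require root and a loaded nvmet module).
run() { echo "+ $*"; }

NVMET=/sys/kernel/config/nvmet
SUBNQN=nqn.2016-06.io.spdk:testnqn
SUB=$NVMET/subsystems/$SUBNQN

run mkdir "$SUB"                    # create the subsystem
run mkdir "$SUB/namespaces/1"       # namespace 1 under it
run mkdir "$NVMET/ports/1"          # transport port 1

# Attribute-file destinations below are assumptions from the nvmet layout.
run "echo SPDK-$SUBNQN > $SUB/attr_model"
run "echo 1 > $SUB/attr_allow_any_host"
run "echo /dev/nvme0n1 > $SUB/namespaces/1/device_path"
run "echo 1 > $SUB/namespaces/1/enable"
run "echo 10.0.0.1 > $NVMET/ports/1/addr_traddr"
run "echo tcp > $NVMET/ports/1/addr_trtype"
run "echo 4420 > $NVMET/ports/1/addr_trsvcid"
run "echo ipv4 > $NVMET/ports/1/addr_adrfam"

# Exposing the subsystem on the port is the final symlink:
run ln -s "$SUB" "$NVMET/ports/1/subsystems/"
```

Once the symlink is in place, the target answers discovery on 10.0.0.1:4420, which is exactly what the `nvme discover` output further down confirms (one discovery entry plus one entry for `nqn.2016-06.io.spdk:testnqn`).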
00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:24:52.013 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:52.273 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:52.273 00:24:52.273 Discovery Log Number of Records 2, Generation counter 2 00:24:52.273 =====Discovery Log Entry 0====== 00:24:52.273 trtype: tcp 00:24:52.273 
adrfam: ipv4 00:24:52.273 subtype: current discovery subsystem 00:24:52.273 treq: not specified, sq flow control disable supported 00:24:52.273 portid: 1 00:24:52.273 trsvcid: 4420 00:24:52.273 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:52.273 traddr: 10.0.0.1 00:24:52.273 eflags: none 00:24:52.273 sectype: none 00:24:52.273 =====Discovery Log Entry 1====== 00:24:52.273 trtype: tcp 00:24:52.273 adrfam: ipv4 00:24:52.273 subtype: nvme subsystem 00:24:52.273 treq: not specified, sq flow control disable supported 00:24:52.273 portid: 1 00:24:52.273 trsvcid: 4420 00:24:52.273 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:52.273 traddr: 10.0.0.1 00:24:52.273 eflags: none 00:24:52.273 sectype: none 00:24:52.273 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:52.273 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:52.273 ===================================================== 00:24:52.273 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:52.273 ===================================================== 00:24:52.273 Controller Capabilities/Features 00:24:52.273 ================================ 00:24:52.273 Vendor ID: 0000 00:24:52.273 Subsystem Vendor ID: 0000 00:24:52.273 Serial Number: bf47706d8617230cdb76 00:24:52.273 Model Number: Linux 00:24:52.273 Firmware Version: 6.8.9-20 00:24:52.273 Recommended Arb Burst: 0 00:24:52.273 IEEE OUI Identifier: 00 00 00 00:24:52.273 Multi-path I/O 00:24:52.273 May have multiple subsystem ports: No 00:24:52.273 May have multiple controllers: No 00:24:52.273 Associated with SR-IOV VF: No 00:24:52.273 Max Data Transfer Size: Unlimited 00:24:52.273 Max Number of Namespaces: 0 00:24:52.274 Max Number of I/O Queues: 1024 00:24:52.274 NVMe Specification Version (VS): 1.3 00:24:52.274 NVMe Specification Version 
(Identify): 1.3 00:24:52.274 Maximum Queue Entries: 1024 00:24:52.274 Contiguous Queues Required: No 00:24:52.274 Arbitration Mechanisms Supported 00:24:52.274 Weighted Round Robin: Not Supported 00:24:52.274 Vendor Specific: Not Supported 00:24:52.274 Reset Timeout: 7500 ms 00:24:52.274 Doorbell Stride: 4 bytes 00:24:52.274 NVM Subsystem Reset: Not Supported 00:24:52.274 Command Sets Supported 00:24:52.274 NVM Command Set: Supported 00:24:52.274 Boot Partition: Not Supported 00:24:52.274 Memory Page Size Minimum: 4096 bytes 00:24:52.274 Memory Page Size Maximum: 4096 bytes 00:24:52.274 Persistent Memory Region: Not Supported 00:24:52.274 Optional Asynchronous Events Supported 00:24:52.274 Namespace Attribute Notices: Not Supported 00:24:52.274 Firmware Activation Notices: Not Supported 00:24:52.274 ANA Change Notices: Not Supported 00:24:52.274 PLE Aggregate Log Change Notices: Not Supported 00:24:52.274 LBA Status Info Alert Notices: Not Supported 00:24:52.274 EGE Aggregate Log Change Notices: Not Supported 00:24:52.274 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.274 Zone Descriptor Change Notices: Not Supported 00:24:52.274 Discovery Log Change Notices: Supported 00:24:52.274 Controller Attributes 00:24:52.274 128-bit Host Identifier: Not Supported 00:24:52.274 Non-Operational Permissive Mode: Not Supported 00:24:52.274 NVM Sets: Not Supported 00:24:52.274 Read Recovery Levels: Not Supported 00:24:52.274 Endurance Groups: Not Supported 00:24:52.274 Predictable Latency Mode: Not Supported 00:24:52.274 Traffic Based Keep ALive: Not Supported 00:24:52.274 Namespace Granularity: Not Supported 00:24:52.274 SQ Associations: Not Supported 00:24:52.274 UUID List: Not Supported 00:24:52.274 Multi-Domain Subsystem: Not Supported 00:24:52.274 Fixed Capacity Management: Not Supported 00:24:52.274 Variable Capacity Management: Not Supported 00:24:52.274 Delete Endurance Group: Not Supported 00:24:52.274 Delete NVM Set: Not Supported 00:24:52.274 Extended LBA 
Formats Supported: Not Supported 00:24:52.274 Flexible Data Placement Supported: Not Supported 00:24:52.274 00:24:52.274 Controller Memory Buffer Support 00:24:52.274 ================================ 00:24:52.274 Supported: No 00:24:52.274 00:24:52.274 Persistent Memory Region Support 00:24:52.274 ================================ 00:24:52.274 Supported: No 00:24:52.274 00:24:52.274 Admin Command Set Attributes 00:24:52.274 ============================ 00:24:52.274 Security Send/Receive: Not Supported 00:24:52.274 Format NVM: Not Supported 00:24:52.274 Firmware Activate/Download: Not Supported 00:24:52.274 Namespace Management: Not Supported 00:24:52.274 Device Self-Test: Not Supported 00:24:52.274 Directives: Not Supported 00:24:52.274 NVMe-MI: Not Supported 00:24:52.274 Virtualization Management: Not Supported 00:24:52.274 Doorbell Buffer Config: Not Supported 00:24:52.274 Get LBA Status Capability: Not Supported 00:24:52.274 Command & Feature Lockdown Capability: Not Supported 00:24:52.274 Abort Command Limit: 1 00:24:52.274 Async Event Request Limit: 1 00:24:52.274 Number of Firmware Slots: N/A 00:24:52.274 Firmware Slot 1 Read-Only: N/A 00:24:52.274 Firmware Activation Without Reset: N/A 00:24:52.274 Multiple Update Detection Support: N/A 00:24:52.274 Firmware Update Granularity: No Information Provided 00:24:52.274 Per-Namespace SMART Log: No 00:24:52.274 Asymmetric Namespace Access Log Page: Not Supported 00:24:52.274 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:52.274 Command Effects Log Page: Not Supported 00:24:52.274 Get Log Page Extended Data: Supported 00:24:52.274 Telemetry Log Pages: Not Supported 00:24:52.274 Persistent Event Log Pages: Not Supported 00:24:52.274 Supported Log Pages Log Page: May Support 00:24:52.274 Commands Supported & Effects Log Page: Not Supported 00:24:52.274 Feature Identifiers & Effects Log Page:May Support 00:24:52.274 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.274 Data Area 4 for Telemetry Log: 
Not Supported 00:24:52.274 Error Log Page Entries Supported: 1 00:24:52.274 Keep Alive: Not Supported 00:24:52.274 00:24:52.274 NVM Command Set Attributes 00:24:52.274 ========================== 00:24:52.274 Submission Queue Entry Size 00:24:52.274 Max: 1 00:24:52.274 Min: 1 00:24:52.274 Completion Queue Entry Size 00:24:52.274 Max: 1 00:24:52.274 Min: 1 00:24:52.274 Number of Namespaces: 0 00:24:52.274 Compare Command: Not Supported 00:24:52.274 Write Uncorrectable Command: Not Supported 00:24:52.274 Dataset Management Command: Not Supported 00:24:52.274 Write Zeroes Command: Not Supported 00:24:52.274 Set Features Save Field: Not Supported 00:24:52.274 Reservations: Not Supported 00:24:52.274 Timestamp: Not Supported 00:24:52.274 Copy: Not Supported 00:24:52.274 Volatile Write Cache: Not Present 00:24:52.274 Atomic Write Unit (Normal): 1 00:24:52.274 Atomic Write Unit (PFail): 1 00:24:52.274 Atomic Compare & Write Unit: 1 00:24:52.274 Fused Compare & Write: Not Supported 00:24:52.274 Scatter-Gather List 00:24:52.274 SGL Command Set: Supported 00:24:52.274 SGL Keyed: Not Supported 00:24:52.274 SGL Bit Bucket Descriptor: Not Supported 00:24:52.274 SGL Metadata Pointer: Not Supported 00:24:52.274 Oversized SGL: Not Supported 00:24:52.274 SGL Metadata Address: Not Supported 00:24:52.274 SGL Offset: Supported 00:24:52.274 Transport SGL Data Block: Not Supported 00:24:52.274 Replay Protected Memory Block: Not Supported 00:24:52.274 00:24:52.274 Firmware Slot Information 00:24:52.274 ========================= 00:24:52.274 Active slot: 0 00:24:52.274 00:24:52.274 00:24:52.274 Error Log 00:24:52.274 ========= 00:24:52.274 00:24:52.274 Active Namespaces 00:24:52.274 ================= 00:24:52.274 Discovery Log Page 00:24:52.274 ================== 00:24:52.274 Generation Counter: 2 00:24:52.274 Number of Records: 2 00:24:52.274 Record Format: 0 00:24:52.274 00:24:52.274 Discovery Log Entry 0 00:24:52.274 ---------------------- 00:24:52.274 Transport Type: 3 (TCP) 
00:24:52.274 Address Family: 1 (IPv4) 00:24:52.274 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:52.274 Entry Flags: 00:24:52.274 Duplicate Returned Information: 0 00:24:52.274 Explicit Persistent Connection Support for Discovery: 0 00:24:52.274 Transport Requirements: 00:24:52.274 Secure Channel: Not Specified 00:24:52.274 Port ID: 1 (0x0001) 00:24:52.274 Controller ID: 65535 (0xffff) 00:24:52.274 Admin Max SQ Size: 32 00:24:52.274 Transport Service Identifier: 4420 00:24:52.274 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:52.274 Transport Address: 10.0.0.1 00:24:52.274 Discovery Log Entry 1 00:24:52.274 ---------------------- 00:24:52.274 Transport Type: 3 (TCP) 00:24:52.274 Address Family: 1 (IPv4) 00:24:52.274 Subsystem Type: 2 (NVM Subsystem) 00:24:52.274 Entry Flags: 00:24:52.274 Duplicate Returned Information: 0 00:24:52.274 Explicit Persistent Connection Support for Discovery: 0 00:24:52.274 Transport Requirements: 00:24:52.274 Secure Channel: Not Specified 00:24:52.274 Port ID: 1 (0x0001) 00:24:52.274 Controller ID: 65535 (0xffff) 00:24:52.274 Admin Max SQ Size: 32 00:24:52.274 Transport Service Identifier: 4420 00:24:52.274 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:52.274 Transport Address: 10.0.0.1 00:24:52.274 16:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.534 get_feature(0x01) failed 00:24:52.534 get_feature(0x02) failed 00:24:52.534 get_feature(0x04) failed 00:24:52.534 ===================================================== 00:24:52.534 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.534 ===================================================== 00:24:52.534 Controller Capabilities/Features 00:24:52.534 ================================ 
00:24:52.534 Vendor ID: 0000 00:24:52.534 Subsystem Vendor ID: 0000 00:24:52.534 Serial Number: 3349f57da5799a7b4440 00:24:52.534 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.534 Firmware Version: 6.8.9-20 00:24:52.534 Recommended Arb Burst: 6 00:24:52.534 IEEE OUI Identifier: 00 00 00 00:24:52.534 Multi-path I/O 00:24:52.534 May have multiple subsystem ports: Yes 00:24:52.534 May have multiple controllers: Yes 00:24:52.534 Associated with SR-IOV VF: No 00:24:52.534 Max Data Transfer Size: Unlimited 00:24:52.534 Max Number of Namespaces: 1024 00:24:52.534 Max Number of I/O Queues: 128 00:24:52.534 NVMe Specification Version (VS): 1.3 00:24:52.534 NVMe Specification Version (Identify): 1.3 00:24:52.534 Maximum Queue Entries: 1024 00:24:52.534 Contiguous Queues Required: No 00:24:52.534 Arbitration Mechanisms Supported 00:24:52.534 Weighted Round Robin: Not Supported 00:24:52.534 Vendor Specific: Not Supported 00:24:52.534 Reset Timeout: 7500 ms 00:24:52.534 Doorbell Stride: 4 bytes 00:24:52.534 NVM Subsystem Reset: Not Supported 00:24:52.534 Command Sets Supported 00:24:52.534 NVM Command Set: Supported 00:24:52.534 Boot Partition: Not Supported 00:24:52.534 Memory Page Size Minimum: 4096 bytes 00:24:52.534 Memory Page Size Maximum: 4096 bytes 00:24:52.534 Persistent Memory Region: Not Supported 00:24:52.534 Optional Asynchronous Events Supported 00:24:52.534 Namespace Attribute Notices: Supported 00:24:52.534 Firmware Activation Notices: Not Supported 00:24:52.534 ANA Change Notices: Supported 00:24:52.534 PLE Aggregate Log Change Notices: Not Supported 00:24:52.534 LBA Status Info Alert Notices: Not Supported 00:24:52.534 EGE Aggregate Log Change Notices: Not Supported 00:24:52.534 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.534 Zone Descriptor Change Notices: Not Supported 00:24:52.534 Discovery Log Change Notices: Not Supported 00:24:52.534 Controller Attributes 00:24:52.534 128-bit Host Identifier: Supported 00:24:52.534 
Non-Operational Permissive Mode: Not Supported 00:24:52.534 NVM Sets: Not Supported 00:24:52.534 Read Recovery Levels: Not Supported 00:24:52.534 Endurance Groups: Not Supported 00:24:52.534 Predictable Latency Mode: Not Supported 00:24:52.534 Traffic Based Keep ALive: Supported 00:24:52.534 Namespace Granularity: Not Supported 00:24:52.534 SQ Associations: Not Supported 00:24:52.534 UUID List: Not Supported 00:24:52.534 Multi-Domain Subsystem: Not Supported 00:24:52.534 Fixed Capacity Management: Not Supported 00:24:52.534 Variable Capacity Management: Not Supported 00:24:52.534 Delete Endurance Group: Not Supported 00:24:52.534 Delete NVM Set: Not Supported 00:24:52.534 Extended LBA Formats Supported: Not Supported 00:24:52.534 Flexible Data Placement Supported: Not Supported 00:24:52.534 00:24:52.534 Controller Memory Buffer Support 00:24:52.534 ================================ 00:24:52.534 Supported: No 00:24:52.534 00:24:52.534 Persistent Memory Region Support 00:24:52.534 ================================ 00:24:52.534 Supported: No 00:24:52.534 00:24:52.534 Admin Command Set Attributes 00:24:52.534 ============================ 00:24:52.534 Security Send/Receive: Not Supported 00:24:52.534 Format NVM: Not Supported 00:24:52.534 Firmware Activate/Download: Not Supported 00:24:52.534 Namespace Management: Not Supported 00:24:52.534 Device Self-Test: Not Supported 00:24:52.534 Directives: Not Supported 00:24:52.534 NVMe-MI: Not Supported 00:24:52.534 Virtualization Management: Not Supported 00:24:52.534 Doorbell Buffer Config: Not Supported 00:24:52.534 Get LBA Status Capability: Not Supported 00:24:52.535 Command & Feature Lockdown Capability: Not Supported 00:24:52.535 Abort Command Limit: 4 00:24:52.535 Async Event Request Limit: 4 00:24:52.535 Number of Firmware Slots: N/A 00:24:52.535 Firmware Slot 1 Read-Only: N/A 00:24:52.535 Firmware Activation Without Reset: N/A 00:24:52.535 Multiple Update Detection Support: N/A 00:24:52.535 Firmware Update Granularity: 
No Information Provided 00:24:52.535 Per-Namespace SMART Log: Yes 00:24:52.535 Asymmetric Namespace Access Log Page: Supported 00:24:52.535 ANA Transition Time : 10 sec 00:24:52.535 00:24:52.535 Asymmetric Namespace Access Capabilities 00:24:52.535 ANA Optimized State : Supported 00:24:52.535 ANA Non-Optimized State : Supported 00:24:52.535 ANA Inaccessible State : Supported 00:24:52.535 ANA Persistent Loss State : Supported 00:24:52.535 ANA Change State : Supported 00:24:52.535 ANAGRPID is not changed : No 00:24:52.535 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:52.535 00:24:52.535 ANA Group Identifier Maximum : 128 00:24:52.535 Number of ANA Group Identifiers : 128 00:24:52.535 Max Number of Allowed Namespaces : 1024 00:24:52.535 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:52.535 Command Effects Log Page: Supported 00:24:52.535 Get Log Page Extended Data: Supported 00:24:52.535 Telemetry Log Pages: Not Supported 00:24:52.535 Persistent Event Log Pages: Not Supported 00:24:52.535 Supported Log Pages Log Page: May Support 00:24:52.535 Commands Supported & Effects Log Page: Not Supported 00:24:52.535 Feature Identifiers & Effects Log Page:May Support 00:24:52.535 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.535 Data Area 4 for Telemetry Log: Not Supported 00:24:52.535 Error Log Page Entries Supported: 128 00:24:52.535 Keep Alive: Supported 00:24:52.535 Keep Alive Granularity: 1000 ms 00:24:52.535 00:24:52.535 NVM Command Set Attributes 00:24:52.535 ========================== 00:24:52.535 Submission Queue Entry Size 00:24:52.535 Max: 64 00:24:52.535 Min: 64 00:24:52.535 Completion Queue Entry Size 00:24:52.535 Max: 16 00:24:52.535 Min: 16 00:24:52.535 Number of Namespaces: 1024 00:24:52.535 Compare Command: Not Supported 00:24:52.535 Write Uncorrectable Command: Not Supported 00:24:52.535 Dataset Management Command: Supported 00:24:52.535 Write Zeroes Command: Supported 00:24:52.535 Set Features Save Field: Not Supported 00:24:52.535 
Reservations: Not Supported 00:24:52.535 Timestamp: Not Supported 00:24:52.535 Copy: Not Supported 00:24:52.535 Volatile Write Cache: Present 00:24:52.535 Atomic Write Unit (Normal): 1 00:24:52.535 Atomic Write Unit (PFail): 1 00:24:52.535 Atomic Compare & Write Unit: 1 00:24:52.535 Fused Compare & Write: Not Supported 00:24:52.535 Scatter-Gather List 00:24:52.535 SGL Command Set: Supported 00:24:52.535 SGL Keyed: Not Supported 00:24:52.535 SGL Bit Bucket Descriptor: Not Supported 00:24:52.535 SGL Metadata Pointer: Not Supported 00:24:52.535 Oversized SGL: Not Supported 00:24:52.535 SGL Metadata Address: Not Supported 00:24:52.535 SGL Offset: Supported 00:24:52.535 Transport SGL Data Block: Not Supported 00:24:52.535 Replay Protected Memory Block: Not Supported 00:24:52.535 00:24:52.535 Firmware Slot Information 00:24:52.535 ========================= 00:24:52.535 Active slot: 0 00:24:52.535 00:24:52.535 Asymmetric Namespace Access 00:24:52.535 =========================== 00:24:52.535 Change Count : 0 00:24:52.535 Number of ANA Group Descriptors : 1 00:24:52.535 ANA Group Descriptor : 0 00:24:52.535 ANA Group ID : 1 00:24:52.535 Number of NSID Values : 1 00:24:52.535 Change Count : 0 00:24:52.535 ANA State : 1 00:24:52.535 Namespace Identifier : 1 00:24:52.535 00:24:52.535 Commands Supported and Effects 00:24:52.535 ============================== 00:24:52.535 Admin Commands 00:24:52.535 -------------- 00:24:52.535 Get Log Page (02h): Supported 00:24:52.535 Identify (06h): Supported 00:24:52.535 Abort (08h): Supported 00:24:52.535 Set Features (09h): Supported 00:24:52.535 Get Features (0Ah): Supported 00:24:52.535 Asynchronous Event Request (0Ch): Supported 00:24:52.535 Keep Alive (18h): Supported 00:24:52.535 I/O Commands 00:24:52.535 ------------ 00:24:52.535 Flush (00h): Supported 00:24:52.535 Write (01h): Supported LBA-Change 00:24:52.535 Read (02h): Supported 00:24:52.535 Write Zeroes (08h): Supported LBA-Change 00:24:52.535 Dataset Management (09h): Supported 
00:24:52.535 00:24:52.535 Error Log 00:24:52.535 ========= 00:24:52.535 Entry: 0 00:24:52.535 Error Count: 0x3 00:24:52.535 Submission Queue Id: 0x0 00:24:52.535 Command Id: 0x5 00:24:52.535 Phase Bit: 0 00:24:52.535 Status Code: 0x2 00:24:52.535 Status Code Type: 0x0 00:24:52.535 Do Not Retry: 1 00:24:52.535 Error Location: 0x28 00:24:52.535 LBA: 0x0 00:24:52.535 Namespace: 0x0 00:24:52.535 Vendor Log Page: 0x0 00:24:52.535 ----------- 00:24:52.535 Entry: 1 00:24:52.535 Error Count: 0x2 00:24:52.535 Submission Queue Id: 0x0 00:24:52.535 Command Id: 0x5 00:24:52.535 Phase Bit: 0 00:24:52.535 Status Code: 0x2 00:24:52.535 Status Code Type: 0x0 00:24:52.535 Do Not Retry: 1 00:24:52.535 Error Location: 0x28 00:24:52.535 LBA: 0x0 00:24:52.535 Namespace: 0x0 00:24:52.535 Vendor Log Page: 0x0 00:24:52.535 ----------- 00:24:52.535 Entry: 2 00:24:52.535 Error Count: 0x1 00:24:52.535 Submission Queue Id: 0x0 00:24:52.535 Command Id: 0x4 00:24:52.535 Phase Bit: 0 00:24:52.535 Status Code: 0x2 00:24:52.535 Status Code Type: 0x0 00:24:52.535 Do Not Retry: 1 00:24:52.535 Error Location: 0x28 00:24:52.535 LBA: 0x0 00:24:52.535 Namespace: 0x0 00:24:52.535 Vendor Log Page: 0x0 00:24:52.535 00:24:52.535 Number of Queues 00:24:52.535 ================ 00:24:52.535 Number of I/O Submission Queues: 128 00:24:52.535 Number of I/O Completion Queues: 128 00:24:52.535 00:24:52.535 ZNS Specific Controller Data 00:24:52.535 ============================ 00:24:52.535 Zone Append Size Limit: 0 00:24:52.535 00:24:52.535 00:24:52.535 Active Namespaces 00:24:52.535 ================= 00:24:52.535 get_feature(0x05) failed 00:24:52.535 Namespace ID:1 00:24:52.535 Command Set Identifier: NVM (00h) 00:24:52.535 Deallocate: Supported 00:24:52.535 Deallocated/Unwritten Error: Not Supported 00:24:52.535 Deallocated Read Value: Unknown 00:24:52.535 Deallocate in Write Zeroes: Not Supported 00:24:52.535 Deallocated Guard Field: 0xFFFF 00:24:52.535 Flush: Supported 00:24:52.535 Reservation: Not Supported 
00:24:52.535 Namespace Sharing Capabilities: Multiple Controllers 00:24:52.535 Size (in LBAs): 1953525168 (931GiB) 00:24:52.535 Capacity (in LBAs): 1953525168 (931GiB) 00:24:52.535 Utilization (in LBAs): 1953525168 (931GiB) 00:24:52.535 UUID: 29bdb581-275f-4a47-825f-e1a20ba82d42 00:24:52.535 Thin Provisioning: Not Supported 00:24:52.535 Per-NS Atomic Units: Yes 00:24:52.535 Atomic Boundary Size (Normal): 0 00:24:52.535 Atomic Boundary Size (PFail): 0 00:24:52.535 Atomic Boundary Offset: 0 00:24:52.535 NGUID/EUI64 Never Reused: No 00:24:52.535 ANA group ID: 1 00:24:52.535 Namespace Write Protected: No 00:24:52.535 Number of LBA Formats: 1 00:24:52.535 Current LBA Format: LBA Format #00 00:24:52.535 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:52.535 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.535 rmmod nvme_tcp 00:24:52.535 rmmod nvme_fabrics 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:52.535 16:53:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.535 16:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:54.441 16:53:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.441 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:24:54.699 16:53:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:55.633 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:55.633 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:55.633 
0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:55.633 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:56.570 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:56.829 00:24:56.829 real 0m9.246s 00:24:56.829 user 0m1.895s 00:24:56.829 sys 0m3.337s 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.829 ************************************ 00:24:56.829 END TEST nvmf_identify_kernel_target 00:24:56.829 ************************************ 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.829 ************************************ 00:24:56.829 START TEST nvmf_auth_host 00:24:56.829 ************************************ 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:56.829 * Looking for test storage... 
00:24:56.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:56.829 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:57.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.088 --rc genhtml_branch_coverage=1 00:24:57.088 --rc genhtml_function_coverage=1 00:24:57.088 --rc genhtml_legend=1 00:24:57.088 --rc geninfo_all_blocks=1 00:24:57.088 --rc geninfo_unexecuted_blocks=1 00:24:57.088 00:24:57.088 ' 00:24:57.088 16:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:57.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.088 --rc genhtml_branch_coverage=1 00:24:57.088 --rc genhtml_function_coverage=1 00:24:57.088 --rc genhtml_legend=1 00:24:57.088 --rc geninfo_all_blocks=1 00:24:57.088 --rc geninfo_unexecuted_blocks=1 00:24:57.088 00:24:57.088 ' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:57.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.088 --rc genhtml_branch_coverage=1 00:24:57.088 --rc genhtml_function_coverage=1 00:24:57.088 --rc genhtml_legend=1 00:24:57.088 --rc geninfo_all_blocks=1 00:24:57.088 --rc geninfo_unexecuted_blocks=1 00:24:57.088 00:24:57.088 ' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:57.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.088 --rc genhtml_branch_coverage=1 00:24:57.088 --rc genhtml_function_coverage=1 00:24:57.088 --rc genhtml_legend=1 00:24:57.088 --rc geninfo_all_blocks=1 00:24:57.088 --rc geninfo_unexecuted_blocks=1 00:24:57.088 00:24:57.088 ' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.088 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.089 16:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:57.089 16:53:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.089 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:58.989 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:58.990 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:58.990 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:58.990 Found net devices under 0000:09:00.0: cvl_0_0 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:58.990 Found net devices under 0000:09:00.1: cvl_0_1 00:24:58.990 16:53:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.990 16:53:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.990 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:24:59.248 00:24:59.248 --- 10.0.0.2 ping statistics --- 00:24:59.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.248 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:24:59.248 00:24:59.248 --- 10.0.0.1 ping statistics --- 00:24:59.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.248 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2451892 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:59.248 16:53:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2451892 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2451892 ']' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:59.248 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b91ec5d6bc2e3c21c88721935ec6e863 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.5OQ 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b91ec5d6bc2e3c21c88721935ec6e863 0 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b91ec5d6bc2e3c21c88721935ec6e863 0 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b91ec5d6bc2e3c21c88721935ec6e863 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.5OQ 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.5OQ 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5OQ 00:24:59.507 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ea1189a78322673817f1cd67c8189aa4ad5acd750eca4a1b82073003e3921d7c 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.RuK 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ea1189a78322673817f1cd67c8189aa4ad5acd750eca4a1b82073003e3921d7c 3 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ea1189a78322673817f1cd67c8189aa4ad5acd750eca4a1b82073003e3921d7c 3 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.507 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ea1189a78322673817f1cd67c8189aa4ad5acd750eca4a1b82073003e3921d7c 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.RuK 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.RuK 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RuK 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=dbf2bfdf7dcb0cc5f1fe58255db316a1c2ab23b9b54e7254 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.SrY 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key dbf2bfdf7dcb0cc5f1fe58255db316a1c2ab23b9b54e7254 0 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 dbf2bfdf7dcb0cc5f1fe58255db316a1c2ab23b9b54e7254 0 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.508 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=dbf2bfdf7dcb0cc5f1fe58255db316a1c2ab23b9b54e7254 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:24:59.508 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.SrY 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.SrY 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SrY 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=44437c561c8046b6da280cf6dcb16d8290727e67a4289ec0 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.P4H 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 44437c561c8046b6da280cf6dcb16d8290727e67a4289ec0 2 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
format_key DHHC-1 44437c561c8046b6da280cf6dcb16d8290727e67a4289ec0 2 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=44437c561c8046b6da280cf6dcb16d8290727e67a4289ec0 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:24:59.766 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.P4H 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.P4H 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.P4H 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=051465dcf1d1bbae0f8fc81d8904c81e 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.BZy 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 051465dcf1d1bbae0f8fc81d8904c81e 1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 051465dcf1d1bbae0f8fc81d8904c81e 1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=051465dcf1d1bbae0f8fc81d8904c81e 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.BZy 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.BZy 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BZy 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@753 -- # key=d82bf4ce1658dad0ca84f4035457d454 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.nMR 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d82bf4ce1658dad0ca84f4035457d454 1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d82bf4ce1658dad0ca84f4035457d454 1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d82bf4ce1658dad0ca84f4035457d454 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.nMR 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.nMR 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nMR 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:24:59.767 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1f62fa13f938a8f10a78e85f09b7e6afdad96eed008c5778 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.so7 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1f62fa13f938a8f10a78e85f09b7e6afdad96eed008c5778 2 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1f62fa13f938a8f10a78e85f09b7e6afdad96eed008c5778 2 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1f62fa13f938a8f10a78e85f09b7e6afdad96eed008c5778 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.so7 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.so7 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.so7 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7a8ee9e98977f9f69ef72541e21fb206 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.plk 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7a8ee9e98977f9f69ef72541e21fb206 0 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7a8ee9e98977f9f69ef72541e21fb206 0 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7a8ee9e98977f9f69ef72541e21fb206 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.plk 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.plk 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.plk 00:24:59.767 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:59.767 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0c82236395061327f584fea7c122d898b0b71c77fd8be7a878d9a02231793f6e 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.nfZ 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0c82236395061327f584fea7c122d898b0b71c77fd8be7a878d9a02231793f6e 3 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0c82236395061327f584fea7c122d898b0b71c77fd8be7a878d9a02231793f6e 3 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0c82236395061327f584fea7c122d898b0b71c77fd8be7a878d9a02231793f6e 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.nfZ 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.nfZ 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nfZ 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2451892 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2451892 ']' 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.026 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5OQ 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RuK ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RuK 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SrY 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.P4H ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P4H 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BZy 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nMR ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nMR 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.so7 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.plk ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.plk 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nfZ 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:00.285 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:00.285 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:01.219 Waiting for block devices as requested 00:25:01.219 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:01.478 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:01.478 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:01.478 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:01.736 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:01.736 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:01.736 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:01.736 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:01.994 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:25:01.994 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:02.252 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:02.252 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:02.252 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:02.252 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:02.510 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:02.510 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:02.510 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:02.769 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:03.028 No valid GPT data, bailing 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:25:03.028 00:25:03.028 Discovery Log Number of Records 2, Generation counter 2 00:25:03.028 =====Discovery Log Entry 0====== 00:25:03.028 trtype: tcp 00:25:03.028 adrfam: ipv4 00:25:03.028 subtype: current discovery subsystem 00:25:03.028 treq: not specified, sq flow control disable supported 00:25:03.028 portid: 1 00:25:03.028 trsvcid: 4420 00:25:03.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:03.028 traddr: 10.0.0.1 00:25:03.028 eflags: none 00:25:03.028 sectype: none 00:25:03.028 =====Discovery Log Entry 1====== 00:25:03.028 trtype: tcp 00:25:03.028 adrfam: ipv4 00:25:03.028 subtype: nvme subsystem 00:25:03.028 treq: not specified, sq flow control disable supported 00:25:03.028 portid: 1 00:25:03.028 trsvcid: 4420 00:25:03.028 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:03.028 traddr: 10.0.0.1 00:25:03.028 eflags: none 00:25:03.028 sectype: none 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.028 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.287 nvme0n1 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.287 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.288 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.546 nvme0n1 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.546 16:53:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.546 
16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.546 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.547 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.804 nvme0n1 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:03.804 nvme0n1 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.804 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 nvme0n1 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.062 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.062 16:53:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.063 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.321 nvme0n1 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.321 
16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:04.321 
16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.321 16:53:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.321 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.580 nvme0n1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.580 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.580 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.580 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.839 nvme0n1 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.839 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.839 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.097 nvme0n1 00:25:05.097 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:05.097 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.097 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.098 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.355 nvme0n1 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
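The `get_main_ns_ip` calls traced above pick the connect address by transport: an associative array maps `rdma`/`tcp` to the *name* of an environment variable, which is then dereferenced with bash indirect expansion. A self-contained re-creation of that selection logic, reconstructed from the trace rather than copied from `nvmf/common.sh`:

```shell
# Reconstruction (from the xtrace output) of the transport -> IP selection;
# details may differ from the real nvmf/common.sh implementation.
get_main_ns_ip() {
    local ip transport=${TEST_TRANSPORT:-}
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # variable name used for RDMA runs
        [tcp]=NVMF_INITIATOR_IP       # variable name used for TCP runs
    )
    [[ -n $transport ]] || return 1
    local var=${ip_candidates[$transport]:-}
    [[ -n $var ]] || return 1
    ip=${!var}                        # indirect expansion of e.g. NVMF_INITIATOR_IP
    [[ -n $ip ]] || return 1
    echo "$ip"
}

NVMF_INITIATOR_IP=10.0.0.1 TEST_TRANSPORT=tcp get_main_ns_ip
```

With `TEST_TRANSPORT=tcp` this resolves to `NVMF_INITIATOR_IP`, which is why every `bdev_nvme_attach_controller` call in this run targets 10.0.0.1.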
00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.355 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.356 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.356 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.613 nvme0n1 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.613 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.871 nvme0n1 00:25:05.871 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.871 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.871 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.871 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.871 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.871 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
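Each `connect_authenticate` round above builds its `--dhchap-ctrlr-key` argument conditionally: the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` assignment expands to two words when a controller key exists for that key id, and to nothing when it does not — which is why the keyid=4 attach (whose `ckey=''`) omits the flag entirely. A stand-alone illustration of the idiom (the placeholder key string is made up, not one of the real secrets):

```shell
# ckeys[4] is deliberately empty, mirroring the trace where ckey='' for keyid=4
ckeys=([2]="DHHC-1:01:placeholder:" [4]="")

keyid=2
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # non-empty ckey: the flag and its value, two words

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # empty ckey: the :+ alternative expands to nothing
```

Passing `"${ckey[@]}"` to `rpc_cmd` then adds the controller-key flag only when a bidirectional-auth key was generated for that key id.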
00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.129 
16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.129 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.387 nvme0n1 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.387 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.388 16:53:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.388 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.646 nvme0n1 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.646 16:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:06.646 
16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:06.646 16:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.646 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.213 nvme0n1 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.213 16:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.213 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.214 
16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.214 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.472 nvme0n1 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.472 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.472 16:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.472 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.038 nvme0n1 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.038 16:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.038 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.604 nvme0n1 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.604 16:53:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.604 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.170 nvme0n1 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.170 16:53:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.170 16:53:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:09.170 16:53:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.170 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.738 nvme0n1 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.738 16:53:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:09.738 16:53:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.738 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.304 nvme0n1 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.304 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.563 16:53:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.563 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 nvme0n1 00:25:11.496 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.496 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.496 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.496 16:53:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.496 16:53:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.496 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.496 16:53:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.430 nvme0n1 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.430 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.430 16:53:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.430 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.364 nvme0n1 00:25:13.364 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.364 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.364 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.364 16:53:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.364 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.364 16:53:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.364 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.645 16:53:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.645 16:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.640 nvme0n1 00:25:14.640 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.640 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.640 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.640 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.640 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.641 16:53:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:14.641 16:53:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.641 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 nvme0n1 00:25:15.576 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.576 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.576 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.576 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.576 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.576 16:53:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 nvme0n1 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.576 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.577 16:53:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.577 16:53:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.577 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.836 16:53:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.836 nvme0n1 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.836 16:53:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.836 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.095 nvme0n1 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.095 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.354 nvme0n1 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:16.354 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.355 
16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.355 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.613 nvme0n1 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.613 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.614 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.872 nvme0n1 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.872 
16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:16.872 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.130 nvme0n1 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l:
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk:
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l:
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk:
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.130 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.389 nvme0n1
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==:
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3:
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==:
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3:
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:17.389 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:17.390 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:17.390 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.390 16:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.648 nvme0n1 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=:
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=:
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.648 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.906 nvme0n1 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS:
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=:
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS:
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]]
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=:
00:25:17.906 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:17.907 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.165 nvme0n1 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==:
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==:
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==:
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]]
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==:
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:18.165 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.166 16:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.424 nvme0n1 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.424 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.424 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.424 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.424 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.424 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l:
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk:
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l:
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk:
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.683 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.941 nvme0n1 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==:
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3:
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==:
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3:
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:18.941 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.200 nvme0n1
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.200 16:53:32
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.200 16:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.200 
16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.200 16:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.458 nvme0n1 00:25:19.459 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.459 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.459 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.459 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.459 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.459 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.717 16:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.717 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.283 nvme0n1 
00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:20.283 16:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.283 
16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.283 16:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 nvme0n1 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.850 16:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.850 16:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.850 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.416 nvme0n1 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:21.416 16:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.416 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.417 16:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.417 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.417 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.417 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.417 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.983 nvme0n1 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.983 16:53:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:21.983 16:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.550 nvme0n1 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.550 16:53:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.550 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.551 16:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.925 nvme0n1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.925 16:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.861 nvme0n1 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 
00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.861 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.794 nvme0n1 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:25.795 16:53:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.795 16:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.730 nvme0n1 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 4 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.730 16:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.663 nvme0n1 00:25:27.663 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.663 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.663 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.664 16:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.664 16:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.664 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.922 nvme0n1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:27.922 16:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.922 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.181 nvme0n1 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.181 
16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.181 16:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.181 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.439 nvme0n1 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.439 16:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.439 16:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:28.439 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:28.440 16:53:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.440 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.698 nvme0n1 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:28.698 16:53:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.698 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.957 nvme0n1 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.957 
16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.957 
16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.957 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.216 nvme0n1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.216 16:53:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.216 
16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
local -A ip_candidates 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.216 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.217 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.475 nvme0n1 00:25:29.475 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.475 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 
00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.476 16:53:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.476 16:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.735 nvme0n1 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.735 16:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.735 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.736 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.994 nvme0n1 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.994 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.995 16:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.995 nvme0n1 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.995 
16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.995 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:30.253 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:30.254 16:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.254 16:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 nvme0n1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.512 16:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:30.512 16:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.512 16:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.512 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.771 nvme0n1 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.771 16:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:30.771 16:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.771 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.338 nvme0n1 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.338 16:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.338 16:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.597 nvme0n1 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.597 
16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.597 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.856 nvme0n1 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.856 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.857 16:53:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.857 16:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.422 nvme0n1 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.422 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:32.423 16:53:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
[[ -z tcp ]] 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.423 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.988 nvme0n1 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.988 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.246 
16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.246 16:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 nvme0n1 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.810 16:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.810 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.811 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:34.374 nvme0n1 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:34.374 
16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.374 16:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.939 nvme0n1 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjkxZWM1ZDZiYzJlM2MyMWM4ODcyMTkzNWVjNmU4NjMni+yS: 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExMTg5YTc4MzIyNjczODE3ZjFjZDY3YzgxODlhYTRhZDVhY2Q3NTBlY2E0YTFiODIwNzMwMDNlMzkyMWQ3Y75T+Sk=: 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:34.939 16:53:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:34.939 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:34.940 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.940 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.940 16:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.874 nvme0n1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.874 16:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:35.874 16:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.874 16:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.807 nvme0n1 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.807 16:53:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.807 16:53:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.807 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.065 16:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.997 nvme0n1 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.997 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.998 16:53:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY2MmZhMTNmOTM4YThmMTBhNzhlODVmMDliN2U2YWZkYWQ5NmVlZDAwOGM1Nzc4QWLP0g==: 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: ]] 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2E4ZWU5ZTk4OTc3ZjlmNjllZjcyNTQxZTIxZmIyMDakcBo3: 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.998 16:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
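The iterations traced above all follow the same host-side pattern: restrict the initiator to one digest/DH-group pair, attach with the matching DH-HMAC-CHAP key, confirm the controller exists, then detach before the next pair. A minimal sketch of that loop body is below. The RPC names (`bdev_nvme_set_options`, `bdev_nvme_attach_controller`, `bdev_nvme_get_controllers`, `bdev_nvme_detach_controller`) and their flags appear verbatim in this log; the `scripts/rpc.py` path and the `DRY_RUN` wrapper are assumptions for illustration, since running the real RPCs requires a live SPDK target.

```shell
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from the transcript above.
# Assumptions (not from the log): rpc.py location, DRY_RUN wrapper.

RPC="scripts/rpc.py"       # assumed SPDK RPC client path
TARGET_IP=10.0.0.1         # listen address echoed by get_main_ns_ip in the log
DRY_RUN=${DRY_RUN:-echo}   # default to printing commands; unset to invoke rpc.py

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # 1. Limit the host to a single digest/DH-group pair for this iteration.
    $DRY_RUN "$RPC" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Attach with the matching key. (In the log, keyid 4 has no
    #    controller key, so --dhchap-ctrlr-key is omitted for that case;
    #    this sketch always passes both for brevity.)
    $DRY_RUN "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$TARGET_IP" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # 3. Verify the controller came up, then detach for the next pair.
    $DRY_RUN "$RPC" bdev_nvme_get_controllers
    $DRY_RUN "$RPC" bdev_nvme_detach_controller nvme0
}

connect_authenticate sha512 ffdhe8192 3
```

With `DRY_RUN` left at its default, the function just prints the four RPC invocations, which makes the per-iteration structure of the trace easier to follow without an SPDK target running.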
00:25:38.931 nvme0n1 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.931 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGM4MjIzNjM5NTA2MTMyN2Y1ODRmZWE3YzEyMmQ4OThiMGI3MWM3N2ZkOGJlN2E4NzhkOWEwMjIzMTc5M2Y2ZezLM2I=: 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:38.932 
16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.932 16:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.306 nvme0n1 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:40.306 
16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.306 request: 00:25:40.306 { 00:25:40.306 "name": "nvme0", 00:25:40.306 "trtype": "tcp", 00:25:40.306 "traddr": "10.0.0.1", 00:25:40.306 "adrfam": "ipv4", 00:25:40.306 "trsvcid": "4420", 00:25:40.306 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.306 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.306 "prchk_reftag": false, 00:25:40.306 "prchk_guard": false, 00:25:40.306 "hdgst": false, 00:25:40.306 "ddgst": false, 00:25:40.306 "allow_unrecognized_csi": false, 00:25:40.306 "method": "bdev_nvme_attach_controller", 00:25:40.306 "req_id": 1 00:25:40.306 } 00:25:40.306 Got JSON-RPC error response 00:25:40.306 response: 00:25:40.306 { 00:25:40.306 "code": -5, 00:25:40.306 "message": "Input/output 
error" 00:25:40.306 } 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:40.306 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.307 request: 00:25:40.307 { 00:25:40.307 "name": "nvme0", 00:25:40.307 "trtype": "tcp", 00:25:40.307 "traddr": "10.0.0.1", 
00:25:40.307 "adrfam": "ipv4", 00:25:40.307 "trsvcid": "4420", 00:25:40.307 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.307 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.307 "prchk_reftag": false, 00:25:40.307 "prchk_guard": false, 00:25:40.307 "hdgst": false, 00:25:40.307 "ddgst": false, 00:25:40.307 "dhchap_key": "key2", 00:25:40.307 "allow_unrecognized_csi": false, 00:25:40.307 "method": "bdev_nvme_attach_controller", 00:25:40.307 "req_id": 1 00:25:40.307 } 00:25:40.307 Got JSON-RPC error response 00:25:40.307 response: 00:25:40.307 { 00:25:40.307 "code": -5, 00:25:40.307 "message": "Input/output error" 00:25:40.307 } 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:40.307 16:53:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:40.307 16:53:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.307 request: 00:25:40.307 { 00:25:40.307 "name": "nvme0", 00:25:40.307 "trtype": "tcp", 00:25:40.307 "traddr": "10.0.0.1", 00:25:40.307 "adrfam": "ipv4", 00:25:40.307 "trsvcid": "4420", 00:25:40.307 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:40.307 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:40.307 "prchk_reftag": false, 00:25:40.307 "prchk_guard": false, 00:25:40.307 "hdgst": false, 00:25:40.307 "ddgst": false, 00:25:40.307 "dhchap_key": "key1", 00:25:40.307 "dhchap_ctrlr_key": "ckey2", 00:25:40.307 "allow_unrecognized_csi": false, 00:25:40.307 "method": "bdev_nvme_attach_controller", 00:25:40.307 "req_id": 1 00:25:40.307 } 00:25:40.307 Got JSON-RPC error response 00:25:40.307 response: 00:25:40.307 { 00:25:40.307 "code": -5, 00:25:40.307 "message": "Input/output error" 00:25:40.307 } 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.307 16:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.566 nvme0n1 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.566 16:53:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:40.566 16:53:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.566 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.825 request: 00:25:40.825 { 00:25:40.825 "name": "nvme0", 00:25:40.825 "dhchap_key": "key1", 00:25:40.825 "dhchap_ctrlr_key": "ckey2", 00:25:40.825 "method": "bdev_nvme_set_keys", 00:25:40.825 "req_id": 1 00:25:40.825 } 00:25:40.825 Got JSON-RPC error response 00:25:40.825 response: 00:25:40.825 { 00:25:40.825 "code": -13, 00:25:40.825 "message": "Permission denied" 00:25:40.825 } 00:25:40.825 
16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:40.825 16:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:41.759 16:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGJmMmJmZGY3ZGNiMGNjNWYxZmU1ODI1NWRiMzE2YTFjMmFiMjNiOWI1NGU3MjU0NvpwNA==: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: ]] 00:25:43.134 16:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDQ0MzdjNTYxYzgwNDZiNmRhMjgwY2Y2ZGNiMTZkODI5MDcyN2U2N2E0Mjg5ZWMwDx7zWA==: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.134 nvme0n1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.134 16:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDUxNDY1ZGNmMWQxYmJhZTBmOGZjODFkODkwNGM4MWW8rY0l: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDgyYmY0Y2UxNjU4ZGFkMGNhODRmNDAzNTQ1N2Q0NTSM3ZQk: 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:43.134 
16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.134 request: 00:25:43.134 { 00:25:43.134 "name": "nvme0", 00:25:43.134 "dhchap_key": "key2", 00:25:43.134 "dhchap_ctrlr_key": "ckey1", 00:25:43.134 "method": "bdev_nvme_set_keys", 00:25:43.134 "req_id": 1 00:25:43.134 } 00:25:43.134 Got JSON-RPC error response 00:25:43.134 response: 00:25:43.134 { 00:25:43.134 "code": -13, 00:25:43.134 "message": "Permission denied" 00:25:43.134 } 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.134 16:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:43.134 16:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.158 rmmod nvme_tcp 00:25:44.158 rmmod nvme_fabrics 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2451892 ']' 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2451892 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2451892 ']' 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2451892 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.158 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2451892 00:25:44.417 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.417 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:44.417 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2451892' 00:25:44.417 killing process with pid 2451892 00:25:44.417 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2451892 00:25:44.417 16:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2451892 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
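The `killprocess` trace above checks that a pid was given, that the process still exists (`kill -0`), inspects its command name with `ps --no-headers -o comm=`, refuses to signal a `sudo` wrapper, then kills and reaps it. A condensed sketch of that pattern (an approximation of the autotest_common.sh helper, not its exact implementation):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above: verify the pid is alive,
# check its command name, then kill it and reap it. The guards mirror the
# log: a pid must be supplied, and a process named "sudo" is never signalled.
killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1                 # the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    name=$(ps --no-headers -o comm= "$pid")   # comm check, as in the trace
    [ "$name" = sudo ] && return 1            # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap; status is the signal's
}
```

In the log this is followed by `kill 2451892` and `wait 2451892`, matching the kill-then-reap order here.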
00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.417 16:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:25:46.953 16:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.889 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:47.889 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:47.889 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:48.826 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.826 16:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5OQ /tmp/spdk.key-null.SrY /tmp/spdk.key-sha256.BZy /tmp/spdk.key-sha384.so7 /tmp/spdk.key-sha512.nfZ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:48.826 16:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:50.201 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:50.201 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:50.201 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:50.201 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:50.201 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:50.201 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:50.201 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:50.201 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:50.201 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:50.201 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:50.201 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:50.201 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:50.201 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:50.201 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:50.201 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:50.201 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:50.201 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:50.201 00:25:50.201 real 0m53.329s 00:25:50.201 user 0m50.225s 00:25:50.201 sys 0m5.867s 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.201 ************************************ 00:25:50.201 END TEST nvmf_auth_host 00:25:50.201 ************************************ 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
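The `END TEST` / `START TEST` banners and the `real`/`user`/`sys` summary around each test come from the `run_test` wrapper. A minimal sketch of that banner-and-timing pattern (an approximation; the real helper lives in autotest_common.sh and does more bookkeeping):

```shell
#!/usr/bin/env bash
# Minimal sketch of the run_test banner/timing pattern seen in the log:
# print a START banner, time the test command, print an END banner, and
# propagate the test's exit status. Timing output goes to stderr.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
```

Usage: `run_test_sketch nvmf_digest ./digest.sh --transport=tcp` would bracket the script's output the way the banners above bracket `nvmf_auth_host`.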
00:25:50.201 16:54:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.201 ************************************ 00:25:50.201 START TEST nvmf_digest 00:25:50.201 ************************************ 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:50.201 * Looking for test storage... 00:25:50.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:25:50.201 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.460 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
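The trace above is scripts/common.sh evaluating `lt 1.15 2`: both version strings are split on `.`, `-`, or `:` into arrays and compared numerically field by field, with missing fields treated as 0. A condensed sketch of that comparison:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions / lt logic traced from scripts/common.sh:
# split both versions on '.', '-' or ':' and compare numerically field by
# field, treating missing fields as 0 (so "1.15" vs "2" compares 1 < 2).
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller field
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger field
    done
    return 1   # equal versions are not "less than"
}
```

As in the log, `version_lt 1.15 2` succeeds, which is why the lcov 1.x option set is selected.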
00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.461 --rc genhtml_branch_coverage=1 00:25:50.461 --rc genhtml_function_coverage=1 00:25:50.461 --rc genhtml_legend=1 00:25:50.461 --rc geninfo_all_blocks=1 00:25:50.461 --rc geninfo_unexecuted_blocks=1 00:25:50.461 00:25:50.461 ' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.461 --rc genhtml_branch_coverage=1 00:25:50.461 --rc genhtml_function_coverage=1 00:25:50.461 --rc genhtml_legend=1 00:25:50.461 --rc geninfo_all_blocks=1 00:25:50.461 --rc geninfo_unexecuted_blocks=1 00:25:50.461 00:25:50.461 ' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.461 --rc genhtml_branch_coverage=1 00:25:50.461 --rc genhtml_function_coverage=1 00:25:50.461 --rc genhtml_legend=1 00:25:50.461 --rc geninfo_all_blocks=1 00:25:50.461 --rc geninfo_unexecuted_blocks=1 00:25:50.461 00:25:50.461 ' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:50.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.461 --rc genhtml_branch_coverage=1 00:25:50.461 --rc genhtml_function_coverage=1 00:25:50.461 --rc genhtml_legend=1 00:25:50.461 --rc geninfo_all_blocks=1 00:25:50.461 --rc geninfo_unexecuted_blocks=1 00:25:50.461 00:25:50.461 ' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
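The `build_nvmf_app_args` checks traced here apply `[ ... -eq 1 ]` to variables that may be empty; immediately after this point the log records `[: : integer expression expected` from common.sh line 33, where an unset value reaches the arithmetic test. A defensive sketch of the pattern, with the flag handling reduced to a hypothetical helper (the name and option are illustrative, not SPDK's exact code):

```shell
#!/usr/bin/env bash
# The log records "[: : integer expression expected" -- an empty string fed
# to '[ ... -eq 1 ]'. Defaulting the variable before the numeric test avoids
# the error. hugepages_flag is a hypothetical stand-in for the real check.
hugepages_flag() {
    local no_huge=${1:-}                 # may be empty, as in the log
    if [ "${no_huge:-0}" -eq 1 ]; then   # ${var:-0} keeps the test numeric
        echo "--no-huge"
    else
        echo ""
    fi
}
```

With the default in place, an empty input behaves like 0 instead of aborting the test.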
00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:50.461 16:54:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.461 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.365 16:54:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:52.365 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:52.365 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:52.365 Found net devices under 0000:09:00.0: cvl_0_0 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:52.365 Found net devices under 0000:09:00.1: cvl_0_1 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@440 -- # is_hw=yes 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.365 16:54:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:25:52.365 00:25:52.365 --- 10.0.0.2 ping statistics --- 00:25:52.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.365 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:25:52.365 00:25:52.365 --- 10.0.0.1 ping statistics --- 00:25:52.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.365 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:52.365 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:52.366 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:52.366 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:52.624 ************************************ 00:25:52.624 START TEST nvmf_digest_clean 00:25:52.624 ************************************ 00:25:52.624 
16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2462597 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2462597 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2462597 ']' 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.624 16:54:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.624 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.624 [2024-10-17 16:54:06.123789] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:25:52.624 [2024-10-17 16:54:06.123884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.624 [2024-10-17 16:54:06.200896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.624 [2024-10-17 16:54:06.273974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.624 [2024-10-17 16:54:06.274058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.624 [2024-10-17 16:54:06.274098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.624 [2024-10-17 16:54:06.274121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.624 [2024-10-17 16:54:06.274140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.624 [2024-10-17 16:54:06.274939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.883 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:53.142 null0 00:25:53.142 [2024-10-17 16:54:06.591317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.142 [2024-10-17 16:54:06.615562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2462618 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2462618 /var/tmp/bperf.sock 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2462618 ']' 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:53.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.142 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:53.142 [2024-10-17 16:54:06.666324] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:25:53.142 [2024-10-17 16:54:06.666400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462618 ] 00:25:53.142 [2024-10-17 16:54:06.727521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.142 [2024-10-17 16:54:06.787858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.400 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:53.400 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:53.400 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:53.400 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:53.400 16:54:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:53.663 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.663 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.231 nvme0n1 00:25:54.231 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:54.231 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:54.231 Running I/O for 2 seconds... 00:25:56.099 17629.00 IOPS, 68.86 MiB/s [2024-10-17T14:54:10.048Z] 18446.50 IOPS, 72.06 MiB/s 00:25:56.358 Latency(us) 00:25:56.358 [2024-10-17T14:54:10.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.358 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:56.358 nvme0n1 : 2.04 18092.17 70.67 0.00 0.00 6930.99 3422.44 43496.49 00:25:56.358 [2024-10-17T14:54:10.048Z] =================================================================================================================== 00:25:56.358 [2024-10-17T14:54:10.048Z] Total : 18092.17 70.67 0.00 0.00 6930.99 3422.44 43496.49 00:25:56.358 { 00:25:56.358 "results": [ 00:25:56.358 { 00:25:56.358 "job": "nvme0n1", 00:25:56.358 "core_mask": "0x2", 00:25:56.358 "workload": "randread", 00:25:56.358 "status": "finished", 00:25:56.358 "queue_depth": 128, 00:25:56.358 "io_size": 4096, 00:25:56.358 "runtime": 2.043149, 00:25:56.358 "iops": 18092.170468233104, 00:25:56.358 "mibps": 70.67254089153556, 00:25:56.358 "io_failed": 0, 00:25:56.358 "io_timeout": 0, 00:25:56.358 "avg_latency_us": 6930.993014372956, 00:25:56.358 "min_latency_us": 3422.4355555555558, 00:25:56.358 "max_latency_us": 43496.485925925925 00:25:56.358 } 00:25:56.358 ], 00:25:56.358 "core_count": 1 00:25:56.358 } 00:25:56.358 16:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:56.358 16:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:25:56.358 16:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:56.358 16:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:56.358 | select(.opcode=="crc32c") 00:25:56.358 | "\(.module_name) \(.executed)"' 00:25:56.358 16:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2462618 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2462618 ']' 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2462618 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2462618 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2462618' 00:25:56.616 killing process with pid 2462618 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2462618 00:25:56.616 Received shutdown signal, test time was about 2.000000 seconds 00:25:56.616 00:25:56.616 Latency(us) 00:25:56.616 [2024-10-17T14:54:10.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.616 [2024-10-17T14:54:10.306Z] =================================================================================================================== 00:25:56.616 [2024-10-17T14:54:10.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.616 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2462618 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2463594 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2463594 /var/tmp/bperf.sock 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2463594 ']' 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:56.875 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.875 [2024-10-17 16:54:10.383872] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:25:56.875 [2024-10-17 16:54:10.383953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463594 ] 00:25:56.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.875 Zero copy mechanism will not be used. 
00:25:56.875 [2024-10-17 16:54:10.447591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.875 [2024-10-17 16:54:10.510133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.133 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:57.133 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:57.133 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:57.133 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:57.133 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:57.391 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.391 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.649 nvme0n1 00:25:57.649 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:57.649 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.908 Zero copy mechanism will not be used. 00:25:57.908 Running I/O for 2 seconds... 
00:25:59.776 6198.00 IOPS, 774.75 MiB/s [2024-10-17T14:54:13.466Z] 6118.00 IOPS, 764.75 MiB/s 00:25:59.776 Latency(us) 00:25:59.776 [2024-10-17T14:54:13.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.776 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:59.776 nvme0n1 : 2.00 6118.75 764.84 0.00 0.00 2610.72 709.97 8786.68 00:25:59.776 [2024-10-17T14:54:13.466Z] =================================================================================================================== 00:25:59.776 [2024-10-17T14:54:13.466Z] Total : 6118.75 764.84 0.00 0.00 2610.72 709.97 8786.68 00:25:59.776 { 00:25:59.776 "results": [ 00:25:59.776 { 00:25:59.776 "job": "nvme0n1", 00:25:59.776 "core_mask": "0x2", 00:25:59.776 "workload": "randread", 00:25:59.776 "status": "finished", 00:25:59.776 "queue_depth": 16, 00:25:59.776 "io_size": 131072, 00:25:59.776 "runtime": 2.00237, 00:25:59.776 "iops": 6118.749282100711, 00:25:59.776 "mibps": 764.8436602625889, 00:25:59.776 "io_failed": 0, 00:25:59.776 "io_timeout": 0, 00:25:59.776 "avg_latency_us": 2610.7161132271676, 00:25:59.776 "min_latency_us": 709.9733333333334, 00:25:59.776 "max_latency_us": 8786.678518518518 00:25:59.776 } 00:25:59.776 ], 00:25:59.776 "core_count": 1 00:25:59.776 } 00:25:59.776 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:59.776 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:59.776 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:59.776 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:59.776 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:59.776 
| select(.opcode=="crc32c") 00:25:59.776 | "\(.module_name) \(.executed)"' 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2463594 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2463594 ']' 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2463594 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.034 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463594 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463594' 00:26:00.292 killing process with pid 2463594 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2463594 00:26:00.292 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.292 00:26:00.292 
Latency(us) 00:26:00.292 [2024-10-17T14:54:13.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.292 [2024-10-17T14:54:13.982Z] =================================================================================================================== 00:26:00.292 [2024-10-17T14:54:13.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2463594 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2464058 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2464058 /var/tmp/bperf.sock 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2464058 ']' 00:26:00.292 16:54:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.292 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:00.551 [2024-10-17 16:54:14.005744] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:00.551 [2024-10-17 16:54:14.005823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464058 ] 00:26:00.551 [2024-10-17 16:54:14.067992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.551 [2024-10-17 16:54:14.129374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.551 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.551 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:00.551 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:00.551 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:00.551 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:01.116 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.117 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.375 nvme0n1 00:26:01.375 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:01.375 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:01.375 Running I/O for 2 seconds... 
00:26:03.680 16535.00 IOPS, 64.59 MiB/s [2024-10-17T14:54:17.370Z] 16579.50 IOPS, 64.76 MiB/s 00:26:03.680 Latency(us) 00:26:03.680 [2024-10-17T14:54:17.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.680 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:03.680 nvme0n1 : 2.01 16584.19 64.78 0.00 0.00 7699.80 5898.24 14272.28 00:26:03.680 [2024-10-17T14:54:17.370Z] =================================================================================================================== 00:26:03.680 [2024-10-17T14:54:17.370Z] Total : 16584.19 64.78 0.00 0.00 7699.80 5898.24 14272.28 00:26:03.680 { 00:26:03.680 "results": [ 00:26:03.680 { 00:26:03.680 "job": "nvme0n1", 00:26:03.680 "core_mask": "0x2", 00:26:03.680 "workload": "randwrite", 00:26:03.680 "status": "finished", 00:26:03.680 "queue_depth": 128, 00:26:03.680 "io_size": 4096, 00:26:03.680 "runtime": 2.009082, 00:26:03.680 "iops": 16584.191187816126, 00:26:03.680 "mibps": 64.78199682740674, 00:26:03.680 "io_failed": 0, 00:26:03.680 "io_timeout": 0, 00:26:03.680 "avg_latency_us": 7699.800135747259, 00:26:03.680 "min_latency_us": 5898.24, 00:26:03.680 "max_latency_us": 14272.284444444444 00:26:03.680 } 00:26:03.680 ], 00:26:03.680 "core_count": 1 00:26:03.680 } 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:03.680 | 
select(.opcode=="crc32c") 00:26:03.680 | "\(.module_name) \(.executed)"' 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2464058 00:26:03.680 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2464058 ']' 00:26:03.938 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2464058 00:26:03.938 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2464058 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2464058' 00:26:03.939 killing process with pid 2464058 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2464058 00:26:03.939 Received shutdown signal, test time was about 2.000000 seconds 00:26:03.939 00:26:03.939 Latency(us) 
00:26:03.939 [2024-10-17T14:54:17.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.939 [2024-10-17T14:54:17.629Z] =================================================================================================================== 00:26:03.939 [2024-10-17T14:54:17.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.939 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2464058 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2464470 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2464470 /var/tmp/bperf.sock 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2464470 ']' 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.197 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:04.197 [2024-10-17 16:54:17.683150] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:04.197 [2024-10-17 16:54:17.683233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464470 ] 00:26:04.197 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.197 Zero copy mechanism will not be used. 
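The digest test above reads the accel framework stats with `accel_get_stats` and filters them through jq: `.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"`, i.e. it keeps only the crc32c operation and prints which module executed it and how many times. The same selection in Python, on a hypothetical stats payload (the field names mirror the jq filter; the numbers are illustrative):

```python
import json

# Hypothetical accel_get_stats output; only the fields the jq filter touches.
stats_json = '''
{"operations": [
  {"opcode": "copy",   "module_name": "software", "executed": 31},
  {"opcode": "crc32c", "module_name": "software", "executed": 16584}
]}
'''

def crc32c_stats(raw: str) -> list[str]:
    # Equivalent of: .operations[] | select(.opcode=="crc32c")
    #                | "\(.module_name) \(.executed)"
    stats = json.loads(raw)
    return [f'{op["module_name"]} {op["executed"]}'
            for op in stats["operations"] if op["opcode"] == "crc32c"]

print(crc32c_stats(stats_json))  # → ['software 16584']
```

The test script then reads that `module executed` pair into `acc_module`/`acc_executed` and checks the expected module (`software` here, since DSA scan is disabled) actually ran.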
00:26:04.197 [2024-10-17 16:54:17.743808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.197 [2024-10-17 16:54:17.804815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.464 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:04.464 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:04.464 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:04.464 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:04.464 16:54:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:04.722 16:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.722 16:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.980 nvme0n1 00:26:04.980 16:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:04.980 16:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:05.239 Zero copy mechanism will not be used. 00:26:05.239 Running I/O for 2 seconds... 
00:26:07.108 6603.00 IOPS, 825.38 MiB/s [2024-10-17T14:54:20.798Z] 6300.50 IOPS, 787.56 MiB/s 00:26:07.108 Latency(us) 00:26:07.108 [2024-10-17T14:54:20.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.108 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:07.108 nvme0n1 : 2.00 6294.99 786.87 0.00 0.00 2530.55 1759.76 8009.96 00:26:07.108 [2024-10-17T14:54:20.798Z] =================================================================================================================== 00:26:07.108 [2024-10-17T14:54:20.798Z] Total : 6294.99 786.87 0.00 0.00 2530.55 1759.76 8009.96 00:26:07.108 { 00:26:07.108 "results": [ 00:26:07.108 { 00:26:07.108 "job": "nvme0n1", 00:26:07.108 "core_mask": "0x2", 00:26:07.108 "workload": "randwrite", 00:26:07.108 "status": "finished", 00:26:07.108 "queue_depth": 16, 00:26:07.108 "io_size": 131072, 00:26:07.108 "runtime": 2.004768, 00:26:07.108 "iops": 6294.992737314243, 00:26:07.108 "mibps": 786.8740921642803, 00:26:07.108 "io_failed": 0, 00:26:07.108 "io_timeout": 0, 00:26:07.108 "avg_latency_us": 2530.552411809591, 00:26:07.108 "min_latency_us": 1759.762962962963, 00:26:07.108 "max_latency_us": 8009.955555555555 00:26:07.108 } 00:26:07.108 ], 00:26:07.108 "core_count": 1 00:26:07.108 } 00:26:07.108 16:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:07.108 16:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:07.108 16:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:07.108 16:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:07.108 16:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:26:07.108 | select(.opcode=="crc32c") 00:26:07.108 | "\(.module_name) \(.executed)"' 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2464470 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2464470 ']' 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2464470 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.367 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2464470 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2464470' 00:26:07.626 killing process with pid 2464470 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2464470 00:26:07.626 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.626 
00:26:07.626 Latency(us) 00:26:07.626 [2024-10-17T14:54:21.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.626 [2024-10-17T14:54:21.316Z] =================================================================================================================== 00:26:07.626 [2024-10-17T14:54:21.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2464470 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2462597 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2462597 ']' 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2462597 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.626 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2462597 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2462597' 00:26:07.886 killing process with pid 2462597 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2462597 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2462597 00:26:07.886 00:26:07.886 real 
0m15.500s 00:26:07.886 user 0m29.941s 00:26:07.886 sys 0m4.532s 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.886 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:07.886 ************************************ 00:26:07.886 END TEST nvmf_digest_clean 00:26:07.886 ************************************ 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:08.145 ************************************ 00:26:08.145 START TEST nvmf_digest_error 00:26:08.145 ************************************ 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2464913 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:08.145 
16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2464913 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2464913 ']' 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.145 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.145 [2024-10-17 16:54:21.676163] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:08.145 [2024-10-17 16:54:21.676255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.145 [2024-10-17 16:54:21.743645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.145 [2024-10-17 16:54:21.807176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.145 [2024-10-17 16:54:21.807231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
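The repeated "Waiting for process to start up and listen on UNIX domain socket..." messages come from the `waitforlisten` helper, which polls until the freshly started app accepts connections on its RPC socket (`/var/tmp/bperf.sock` or `/var/tmp/spdk.sock` here). A rough, self-contained sketch of that pattern, assuming a Unix-socket platform; names and timings are mine:

```python
import os
import socket
import threading
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0,
                    interval: float = 0.05) -> bool:
    # Poll until something accepts connections on the unix socket,
    # roughly what waitforlisten does before issuing RPCs.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False

# Demo: bring up a listener after a short delay, then wait for it.
path = "/tmp/demo_bperf.sock"
if os.path.exists(path):
    os.unlink(path)

def serve():
    time.sleep(0.2)  # simulate app startup latency
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    time.sleep(2)  # keep the listener alive briefly

threading.Thread(target=serve, daemon=True).start()
print(wait_for_listen(path))  # → True
```

In the real helper the loop also checks that the target PID is still alive (`kill -0 $pid`), so a crashed app fails fast instead of burning the whole retry budget.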
00:26:08.145 [2024-10-17 16:54:21.807259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.145 [2024-10-17 16:54:21.807271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.145 [2024-10-17 16:54:21.807281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.145 [2024-10-17 16:54:21.807942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 [2024-10-17 16:54:21.968797] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.404 16:54:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.404 16:54:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 null0 00:26:08.404 [2024-10-17 16:54:22.093712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.662 [2024-10-17 16:54:22.117929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2465050 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2465050 /var/tmp/bperf.sock 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2465050 ']' 
00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.662 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.662 [2024-10-17 16:54:22.169212] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:08.662 [2024-10-17 16:54:22.169288] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465050 ] 00:26:08.662 [2024-10-17 16:54:22.230086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.662 [2024-10-17 16:54:22.293151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.920 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.920 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:08.920 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.920 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:09.178 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:09.178 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.178 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.178 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.178 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.178 16:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.745 nvme0n1 00:26:09.745 16:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:09.745 16:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.745 16:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.745 16:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.745 16:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:09.745 16:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.745 Running I/O for 2 seconds... 00:26:09.745 [2024-10-17 16:54:23.286069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.745 [2024-10-17 16:54:23.286121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-10-17 16:54:23.286141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.745 [2024-10-17 16:54:23.303365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.745 [2024-10-17 16:54:23.303402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-10-17 16:54:23.303421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.745 [2024-10-17 16:54:23.317908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.745 [2024-10-17 16:54:23.317944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-10-17 16:54:23.317963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.745 [2024-10-17 16:54:23.330171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.745 [2024-10-17 16:54:23.330201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13163 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.745 [2024-10-17 16:54:23.330217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.346179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.346208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.346225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.362772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.362807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.362826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.375105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.375133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.375148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.389160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.389190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.389206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.404555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.404589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.404607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.417133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.417163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.417180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.746 [2024-10-17 16:54:23.430925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:09.746 [2024-10-17 16:54:23.430958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.746 [2024-10-17 16:54:23.430977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.004 [2024-10-17 16:54:23.444295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0b00) 00:26:10.004 [2024-10-17 16:54:23.444322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.004 [2024-10-17 16:54:23.444337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.004 [2024-10-17 16:54:23.462019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.004 [2024-10-17 16:54:23.462064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.004 [2024-10-17 16:54:23.462079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.004 [2024-10-17 16:54:23.475698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.004 [2024-10-17 16:54:23.475731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.004 [2024-10-17 16:54:23.475760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.004 [2024-10-17 16:54:23.492154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.004 [2024-10-17 16:54:23.492185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.004 [2024-10-17 16:54:23.492202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.004 [2024-10-17 16:54:23.503988] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.504031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.504063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.520178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.520208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.520224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.532679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.532713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.532732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.546565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.546600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.546619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.561123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.561152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.561182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.576014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.576059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.576074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.590817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.590851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.590870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.603278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.603329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.603348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.618389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.618422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.618440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.632099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.632126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.632141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.647760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.647792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.647811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.663596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.663630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.663648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.675114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.675159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.675175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.005 [2024-10-17 16:54:23.691489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.005 [2024-10-17 16:54:23.691522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.005 [2024-10-17 16:54:23.691541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.263 [2024-10-17 16:54:23.708051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.263 [2024-10-17 16:54:23.708079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.263 [2024-10-17 16:54:23.708095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.263 [2024-10-17 16:54:23.722147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.263 [2024-10-17 16:54:23.722178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3302 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:10.263 [2024-10-17 16:54:23.722194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.263 [2024-10-17 16:54:23.734250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.263 [2024-10-17 16:54:23.734280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.263 [2024-10-17 16:54:23.734313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.263 [2024-10-17 16:54:23.747862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.263 [2024-10-17 16:54:23.747897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.263 [2024-10-17 16:54:23.747915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.263 [2024-10-17 16:54:23.762101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.762129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.762145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.778488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.778524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:77 nsid:1 lba:4368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.778542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.795346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.795381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.795400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.812288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.812324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.812342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.827804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.827839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.827857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.840273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.840322] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.840341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.856080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.856108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.856129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.869504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.869540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.869559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.885530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.885565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.885583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.899436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.899470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.899488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.912842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.912878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.912896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.924642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.924676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.924695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.264 [2024-10-17 16:54:23.942123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.264 [2024-10-17 16:54:23.942153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.264 [2024-10-17 16:54:23.942185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:23.957983] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:23.958029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:23.958068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:23.970906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:23.970941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:23.970960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:23.984454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:23.984488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:23.984506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.001922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.001956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.001974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.014084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.014114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.014131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.029054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.029092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.029108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.047638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.047691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.059822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.059855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.059873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.075656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.075689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.075707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.093968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.094009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.094045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.109976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.110018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.110045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.127253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.127313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 
16:54:24.127333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.140214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.140243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.140258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.158055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.158083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.158099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.172661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.172696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.172714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.185877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.185911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6148 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.185930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.523 [2024-10-17 16:54:24.200259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.523 [2024-10-17 16:54:24.200289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.523 [2024-10-17 16:54:24.200321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.214020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.214058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.214075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.226804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.226834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.226850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.239403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.239439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.239456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.251857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.251901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.251917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 17292.00 IOPS, 67.55 MiB/s [2024-10-17T14:54:24.472Z] [2024-10-17 16:54:24.265560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.265589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.265604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.280598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.280643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.280659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.291305] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.291349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.291365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.305841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.305871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.305887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.316571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.316602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.316636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.332075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.332105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.332121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.345512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.345541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.345557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.360425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.360454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.360471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.371620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.371649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.371664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.386493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.386523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.386539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.398885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.398913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.398928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.413738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.413767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.782 [2024-10-17 16:54:24.413782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.782 [2024-10-17 16:54:24.427600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.782 [2024-10-17 16:54:24.427629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.783 [2024-10-17 16:54:24.427645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.783 [2024-10-17 16:54:24.444906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.783 [2024-10-17 16:54:24.444936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.783 [2024-10-17 
16:54:24.444952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.783 [2024-10-17 16:54:24.455485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.783 [2024-10-17 16:54:24.455513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.783 [2024-10-17 16:54:24.455528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.783 [2024-10-17 16:54:24.469567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:10.783 [2024-10-17 16:54:24.469597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.783 [2024-10-17 16:54:24.469619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.484431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.484460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.484476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.496497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.496526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18649 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.496541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.510191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.510220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.510236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.524097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.524127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.524143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.537127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.537157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.537173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.548942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.548970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.549008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.561367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.561397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.561413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.574435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.574464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.574480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.588343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.588373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.588389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.599813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.599841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.599856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.613510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.613538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.613553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.626890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.626919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.626936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.641018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.641047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.641063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.653127] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.653157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.653175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.665794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.665823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.665838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.680710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.680738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.680752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.691774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.691802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.691824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.707085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.707113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.707129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.042 [2024-10-17 16:54:24.720884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.042 [2024-10-17 16:54:24.720913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.042 [2024-10-17 16:54:24.720943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.301 [2024-10-17 16:54:24.732561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.301 [2024-10-17 16:54:24.732590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.301 [2024-10-17 16:54:24.732606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.301 [2024-10-17 16:54:24.747031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.301 [2024-10-17 16:54:24.747060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.301 [2024-10-17 16:54:24.747075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.301 [2024-10-17 16:54:24.760156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.301 [2024-10-17 16:54:24.760186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.301 [2024-10-17 16:54:24.760203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.301 [2024-10-17 16:54:24.771592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.301 [2024-10-17 16:54:24.771620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.301 [2024-10-17 16:54:24.771636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.301 [2024-10-17 16:54:24.786312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.301 [2024-10-17 16:54:24.786341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.301 [2024-10-17 16:54:24.786357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.798203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.798234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.798251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.811658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.811694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.811710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.823590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.823619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.823635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.835845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.835876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.835907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.848120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.848151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11458 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.848168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.861233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.861265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.861297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.874075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.874105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.874120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.886804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.886833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.886849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.901639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.901670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:23417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.901701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.913236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.913264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.913280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.928553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.928583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.928599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.944158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.944189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.944206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.955404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 
16:54:24.955434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.955451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.966915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.966943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.966959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.302 [2024-10-17 16:54:24.980146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.302 [2024-10-17 16:54:24.980176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.302 [2024-10-17 16:54:24.980192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:24.994809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:24.994841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:24.994857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.006229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.006258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.006273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.021526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.021555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.021570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.033908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.033937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.033959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.046357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.046387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.046402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.058967] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.058997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.059037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.071786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.071816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.071831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.085351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.085380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.085395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.097948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.097977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.097992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.110626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.110655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.110670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.123290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.123338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.123354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.135814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.135843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.135858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.148032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.148063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.148080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.161699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.161727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.161743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.172874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.172904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.172919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.185621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.185650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.185665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.197931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.197959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 
16:54:25.197974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.210630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.210658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.210672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.222853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.222881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.222897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.561 [2024-10-17 16:54:25.237722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.561 [2024-10-17 16:54:25.237752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.561 [2024-10-17 16:54:25.237767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.819 [2024-10-17 16:54:25.252192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.819 [2024-10-17 16:54:25.252222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13930 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.819 [2024-10-17 16:54:25.252250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.819 [2024-10-17 16:54:25.264456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0b00) 00:26:11.819 [2024-10-17 16:54:25.264485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.819 [2024-10-17 16:54:25.264501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.819 18367.50 IOPS, 71.75 MiB/s 00:26:11.819 Latency(us) 00:26:11.819 [2024-10-17T14:54:25.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.819 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:11.819 nvme0n1 : 2.01 18400.62 71.88 0.00 0.00 6948.22 3349.62 23592.96 00:26:11.819 [2024-10-17T14:54:25.509Z] =================================================================================================================== 00:26:11.819 [2024-10-17T14:54:25.509Z] Total : 18400.62 71.88 0.00 0.00 6948.22 3349.62 23592.96 00:26:11.819 { 00:26:11.819 "results": [ 00:26:11.819 { 00:26:11.819 "job": "nvme0n1", 00:26:11.819 "core_mask": "0x2", 00:26:11.819 "workload": "randread", 00:26:11.819 "status": "finished", 00:26:11.819 "queue_depth": 128, 00:26:11.819 "io_size": 4096, 00:26:11.820 "runtime": 2.005204, 00:26:11.820 "iops": 18400.621582641965, 00:26:11.820 "mibps": 71.87742805719517, 00:26:11.820 "io_failed": 0, 00:26:11.820 "io_timeout": 0, 00:26:11.820 "avg_latency_us": 6948.221227661789, 00:26:11.820 "min_latency_us": 3349.617777777778, 00:26:11.820 "max_latency_us": 23592.96 00:26:11.820 } 00:26:11.820 ], 00:26:11.820 "core_count": 1 00:26:11.820 } 00:26:11.820 16:54:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:11.820 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:11.820 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:11.820 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:11.820 | .driver_specific 00:26:11.820 | .nvme_error 00:26:11.820 | .status_code 00:26:11.820 | .command_transient_transport_error' 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2465050 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2465050 ']' 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2465050 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2465050 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2465050' 
00:26:12.078 killing process with pid 2465050 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2465050 00:26:12.078 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.078 00:26:12.078 Latency(us) 00:26:12.078 [2024-10-17T14:54:25.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.078 [2024-10-17T14:54:25.768Z] =================================================================================================================== 00:26:12.078 [2024-10-17T14:54:25.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.078 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2465050 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2465455 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2465455 /var/tmp/bperf.sock 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2465455 ']' 00:26:12.337 16:54:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:12.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.337 16:54:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.337 [2024-10-17 16:54:25.869494] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:12.337 [2024-10-17 16:54:25.869588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465455 ] 00:26:12.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.337 Zero copy mechanism will not be used. 
00:26:12.337 [2024-10-17 16:54:25.931802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.337 [2024-10-17 16:54:25.992273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.596 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.596 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:12.596 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.596 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.854 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:12.854 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.854 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:12.854 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.854 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.854 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.421 nvme0n1 00:26:13.421 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:13.421 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.421 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.421 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.421 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:13.421 16:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:13.421 Zero copy mechanism will not be used. 00:26:13.421 Running I/O for 2 seconds... 00:26:13.421 [2024-10-17 16:54:26.971103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:26.971166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:26.971187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:26.977249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:26.977299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:26.977317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.421 
[2024-10-17 16:54:26.983372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:26.983410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:26.983437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:26.989349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:26.989386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:26.989407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:26.995390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:26.995427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:26.995451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.001406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.001443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.001469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.007460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.007505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.007535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.013644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.013680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.013700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.020444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.020480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.020500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.027264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.027316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.027340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.031892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.031929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.031959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.039707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.039744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.039765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.047442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.047479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.047498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.055225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.055256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:13.421 [2024-10-17 16:54:27.055273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.062823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.062859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.062879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.071419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.071456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.071475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.079721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.079758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.079778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.087997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.088065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.088082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.421 [2024-10-17 16:54:27.095801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.421 [2024-10-17 16:54:27.095838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.421 [2024-10-17 16:54:27.095858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.422 [2024-10-17 16:54:27.103637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.422 [2024-10-17 16:54:27.103675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.422 [2024-10-17 16:54:27.103696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.690 [2024-10-17 16:54:27.111466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.690 [2024-10-17 16:54:27.111502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.690 [2024-10-17 16:54:27.111522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.690 [2024-10-17 16:54:27.118642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.690 [2024-10-17 16:54:27.118679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.690 [2024-10-17 16:54:27.118699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.125053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.125100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.125117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.131615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.131652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.131679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.138219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.138251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.138268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.144647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 
00:26:13.691 [2024-10-17 16:54:27.144684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.144704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.151019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.151065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.151081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.157332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.157382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.157402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.164100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.164131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.164148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.170396] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.170433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.170453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.176395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.176431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.176451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.182311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.182357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.182378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.188589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.188626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.188645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.193981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.194027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.194074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.199024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.199084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.199102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.206183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.206213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.206230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.212817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.212853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.212872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.219174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.219205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.219222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.225761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.225797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.225816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.233259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.233292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.233325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-10-17 16:54:27.239674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:13.691 [2024-10-17 16:54:27.239711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-10-17 16:54:27.239737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.246635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.246672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.246691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.254199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.254232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.254250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.261902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.261939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.261959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.269932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.269970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.269990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.278227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.278258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.278275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.286769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.286805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.286824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.294805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.294843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.294863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.301732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.301769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.301790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.307635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.307679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.307722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.313075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.313108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.313126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.318277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.318345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.318369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.322557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.322593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.322614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.328870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.328907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.328927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.335717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.335754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.335782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.343829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.343865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.343885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.351550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.351588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.351608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.358955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.358992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.359023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.366938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.366975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.366995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.691 [2024-10-17 16:54:27.374664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.691 [2024-10-17 16:54:27.374701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.691 [2024-10-17 16:54:27.374721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.382197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.382232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.382250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.389869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.389906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.389927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.397583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.397620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.397640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.405295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.405358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.405381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.413165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.413197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.413215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.421169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.421203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.421222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.429015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.429075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.429098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.436134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.436168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.436186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.440870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.440906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.440927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.448201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.448249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.448267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.456413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.456451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.456471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.464708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.464745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.464765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.471952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.471988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.472021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.478758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.478795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.478814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.484883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.484921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.484940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.490922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.490965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.490986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.496873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.496910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.496930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.502892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.502953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.951 [2024-10-17 16:54:27.502974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.951 [2024-10-17 16:54:27.507403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.951 [2024-10-17 16:54:27.507439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.507459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.512695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.512731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.512765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.517870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.517906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.517927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.522861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.522898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.522917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.527786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.527822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.527843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.532960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.532996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.533025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.537851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.537888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.537908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.542903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.542939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.542958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.548017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.548066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.548085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.553247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.553280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.553297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.558143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.558176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.558194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.564809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.564846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.564866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.571343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.571380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.571400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.578647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.578684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.578705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.585652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.585688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.585715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.591658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.591696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.591716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.595982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.596056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.596077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.601849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.601887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.601907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.607099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.607132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.607150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.612643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.612679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.612699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.617402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.617437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.617457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.623232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.623313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.623335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.628942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.628980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.629008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.633753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.633790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.633810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:13.952 [2024-10-17 16:54:27.639792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:13.952 [2024-10-17 16:54:27.639828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:13.952 [2024-10-17 16:54:27.639848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.645145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.645208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.645230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.650733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.650769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.650788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.654905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.654942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.654961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.662886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.662923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.662943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.669472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.669509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.669530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.675963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.676010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.676049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.681997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.682055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.682078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.688560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.688597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.688617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.694565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.694602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.694622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.701317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.701354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.701374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.709116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.709149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.709166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.715748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.715784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.715804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.723644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.723680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.723699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.730348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.730383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.730404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.736929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.736967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.736987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.742333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.742376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.742398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.748156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.748188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.748206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.753435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.753480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.753521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.758308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.212 [2024-10-17 16:54:27.758359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.212 [2024-10-17 16:54:27.758379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.212 [2024-10-17 16:54:27.763540]
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.763576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.763596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.769222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.769253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.769270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.775902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.775939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.775972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.780843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.780878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.780898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.787764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.787800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.787821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.796146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.796180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.796212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.803802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.803840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.803860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.812531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.812568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-10-17 16:54:27.812588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.212 [2024-10-17 16:54:27.820352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.212 [2024-10-17 16:54:27.820389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.820409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.828234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.828268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.828299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.836151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.836185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.836203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.844134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.844176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.844193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.852385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.852422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.852443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.860387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.860424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.860451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.868901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.868937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.868958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.877891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.877928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.877948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.886138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.886171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.886189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.893047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.893080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.893098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.213 [2024-10-17 16:54:27.897812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.213 [2024-10-17 16:54:27.897870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.213 [2024-10-17 16:54:27.897892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.905322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.905359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.905379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.913143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.913195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.913213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.921250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.921283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.921301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.929633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.929680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.929701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.938279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.938312] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.938347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.943707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.943761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.943781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.950847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.950883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.950903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.958412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.958449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.958469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.473 4638.00 IOPS, 579.75 MiB/s [2024-10-17T14:54:28.163Z] [2024-10-17 16:54:27.968543] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.968581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.968601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.976198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.976229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.976245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.982979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.983025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.983060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.990206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.990240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.990264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:27.997274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:27.997307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:27.997341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.004134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:28.004168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:28.004185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.010109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:28.010142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:28.010161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.015164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:28.015196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:28.015213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.021903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:28.021939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:28.021960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.028559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:28.028596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:28.028616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.035962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.473 [2024-10-17 16:54:28.035998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-10-17 16:54:28.036029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-10-17 16:54:28.042551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.042588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.042607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.048852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.048894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.048914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.055520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.055557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.055577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.060033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.060083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.060101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.065341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.065379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.065399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.072138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.072170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.072188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.078272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.078334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.078355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.082872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.082925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.082946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.088324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.088357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.094738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.094774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.094794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.101651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.101699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.101717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.106557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.106592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.474 [2024-10-17 16:54:28.106613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.474 [2024-10-17 16:54:28.113487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:14.474 [2024-10-17 16:54:28.113525] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.113545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.121545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.121582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.121602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.128181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.128215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.128240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.134479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.134516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.134536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.140946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.140983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.141013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.147633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.147669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.147690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.154055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.154087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.154127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.474 [2024-10-17 16:54:28.159965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.474 [2024-10-17 16:54:28.160010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.474 [2024-10-17 16:54:28.160046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.165781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.165814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.165833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.171112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.171145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.171163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.175858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.175891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.175924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.181180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.181214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.181232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.186687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.186738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.186758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.191726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.191774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.191825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.197358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.197399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.197416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.204355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.204394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.204426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.211200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.211248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.211266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.218667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.218714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.218732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.226198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.226231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.226264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.232820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.232852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.232871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.238443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.238476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.238494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.243857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.243890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.243909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.249386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.747 [2024-10-17 16:54:28.249419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.747 [2024-10-17 16:54:28.249436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.747 [2024-10-17 16:54:28.254897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.254945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.254962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.260553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.260584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.260601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.266138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.266172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.266190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.271697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.271730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.271747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.277380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.277410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.277427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.283216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.283249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.283266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.287538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.287570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.287589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.292319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.292351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.292382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.297971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.298036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.298068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.303062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.303113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.303142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.308887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.308919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.308937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.315881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.315915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.315933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.324154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.324202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.324221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.331667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.331716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.331733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.337373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.337406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.337424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.342931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.342963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.342981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.348521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.348568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.348585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.354124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.354157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.354175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.359640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.359677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.359710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.365253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.365301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.365319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.370766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.370798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.370814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.376398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.376431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.376449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.382427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.382474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.382491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.389917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.389949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.397309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.397342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.397374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.404746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.404794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.404810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.413134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.413182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.413205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:14.748 [2024-10-17 16:54:28.421293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:14.748 [2024-10-17 16:54:28.421327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.748 [2024-10-17 16:54:28.421345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.055 [2024-10-17 16:54:28.429714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.055 [2024-10-17 16:54:28.429750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.055 [2024-10-17 16:54:28.429768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.055 [2024-10-17 16:54:28.438255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.438290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.438307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.446034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.446069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.446088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.453736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.453781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.453800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.461438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.461485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.461510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.469158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.469192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.469211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.476405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.476439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.476457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.483845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.483886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.483918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.491190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.491223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.491241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.497580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.497613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.497646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.503336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.503368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.503389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.508897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.508929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.508948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.514630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.514662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.514678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.520335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.520368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.520385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.526061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.526095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.526112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.531725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.531757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.531775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.537304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.537369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.537388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.542604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.542656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.542676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.546852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.546911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.546928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.552432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.552464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.552481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.557692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.557732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.557764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.562717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.562749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.562775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.567413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.567446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.567464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.572619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.572660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.572678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.577841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.577920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.577949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.583520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.583602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.583622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.588329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.588381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.588422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.594734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.594782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.594804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:15.056 [2024-10-17 16:54:28.602189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.056 [2024-10-17 16:54:28.602223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.056 [2024-10-17 16:54:28.602257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:15.057 [2024-10-17 16:54:28.609776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.057 [2024-10-17 16:54:28.609809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.057 [2024-10-17 16:54:28.609841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:15.057 [2024-10-17 16:54:28.617408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0)
00:26:15.057 [2024-10-17 16:54:28.617440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:15.057 [2024-10-17 16:54:28.617457]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.625792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.625824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.625856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.634040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.634073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.634091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.641826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.641866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.641884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.649547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.649593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.649610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.656779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.656812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.656830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.664086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.664119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.664137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.670411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.670459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.670478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.676130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.676178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.676196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.682293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.682342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.682360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.688401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.688434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.688452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.694734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.694767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.694785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.700649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.700681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.700714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.707243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.707276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.707294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.714534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.714568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.714586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.721937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.721970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.721988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.728425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.728460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.057 [2024-10-17 16:54:28.734062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.057 [2024-10-17 16:54:28.734099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.057 [2024-10-17 16:54:28.734117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.739662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.739696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.739714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.743863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.743905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.743924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.748700] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.748733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.748780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.753780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.753813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.753831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.758654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.758701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.764189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.764222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.764255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:15.324 [2024-10-17 16:54:28.769062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.324 [2024-10-17 16:54:28.769095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.324 [2024-10-17 16:54:28.769113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.774888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.774920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.774939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.781676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.781709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.781727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.785753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.785783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.785800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.791194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.791226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.791244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.797236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.797269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.797287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.802814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.802848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.802867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.806842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.806874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.806892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.812141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.812173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.812191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.817890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.817925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.817943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.822260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.822294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.822313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.826336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.826368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:15.325 [2024-10-17 16:54:28.826386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.832103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.832136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.832155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.837349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.837407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.837440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.841553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.841586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.841651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.847601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.847634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.847651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.853802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.853834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.853851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.860838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.860870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.860888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.866804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.866853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.866870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.872837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.872870] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.872888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.878259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.878303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.878321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.881795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.881827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.881845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.887419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.887458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.887477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.892385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.892418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.892437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.897253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.897286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.897304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.902143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.902177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.902195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.907119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.907152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.907170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.913283] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.913338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.325 [2024-10-17 16:54:28.913356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.325 [2024-10-17 16:54:28.918620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.325 [2024-10-17 16:54:28.918653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.918671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.924031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.924079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.924097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.930281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.930314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.930332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.936372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.936405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.936457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.941525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.941559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.941576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.947828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.947861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.947890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.955239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.955272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.955289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.961499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.961532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.961551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:15.326 [2024-10-17 16:54:28.966260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208a1e0) 00:26:15.326 [2024-10-17 16:54:28.966307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.326 [2024-10-17 16:54:28.966323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:15.326 4868.00 IOPS, 608.50 MiB/s 00:26:15.326 Latency(us) 00:26:15.326 [2024-10-17T14:54:29.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:15.326 nvme0n1 : 2.00 4867.49 608.44 0.00 0.00 3282.28 1025.52 10048.85 00:26:15.326 [2024-10-17T14:54:29.016Z] =================================================================================================================== 00:26:15.326 [2024-10-17T14:54:29.016Z] Total : 4867.49 608.44 0.00 0.00 3282.28 1025.52 10048.85 00:26:15.326 { 00:26:15.326 "results": [ 00:26:15.326 { 00:26:15.326 "job": "nvme0n1", 00:26:15.326 "core_mask": "0x2", 00:26:15.326 "workload": "randread", 00:26:15.326 "status": "finished", 00:26:15.326 "queue_depth": 16, 00:26:15.326 "io_size": 131072, 00:26:15.326 "runtime": 2.003495, 
00:26:15.326 "iops": 4867.494054140389, 00:26:15.326 "mibps": 608.4367567675487, 00:26:15.326 "io_failed": 0, 00:26:15.326 "io_timeout": 0, 00:26:15.326 "avg_latency_us": 3282.277013945857, 00:26:15.326 "min_latency_us": 1025.517037037037, 00:26:15.326 "max_latency_us": 10048.853333333333 00:26:15.326 } 00:26:15.326 ], 00:26:15.326 "core_count": 1 00:26:15.326 } 00:26:15.326 16:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:15.326 16:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:15.326 16:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:15.326 16:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:15.326 | .driver_specific 00:26:15.326 | .nvme_error 00:26:15.326 | .status_code 00:26:15.326 | .command_transient_transport_error' 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 314 > 0 )) 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2465455 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2465455 ']' 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2465455 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.585 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2465455 00:26:15.843 16:54:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2465455' 00:26:15.843 killing process with pid 2465455 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2465455 00:26:15.843 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.843 00:26:15.843 Latency(us) 00:26:15.843 [2024-10-17T14:54:29.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.843 [2024-10-17T14:54:29.533Z] =================================================================================================================== 00:26:15.843 [2024-10-17T14:54:29.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2465455 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2465871 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 
-r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2465871 /var/tmp/bperf.sock 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2465871 ']' 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:15.843 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:15.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:15.844 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:15.844 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.104 [2024-10-17 16:54:29.575322] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:26:16.104 [2024-10-17 16:54:29.575402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465871 ] 00:26:16.104 [2024-10-17 16:54:29.638750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.104 [2024-10-17 16:54:29.705497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.362 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:16.362 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:16.362 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:16.362 16:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:16.620 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:16.620 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.620 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.620 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.620 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.620 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.878 nvme0n1 00:26:16.878 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:16.878 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.878 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.878 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.878 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:16.878 16:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.138 Running I/O for 2 seconds... 
00:26:17.138 [2024-10-17 16:54:30.624686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166f6458 00:26:17.138 [2024-10-17 16:54:30.625923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.625966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.637981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e95a0 00:26:17.138 [2024-10-17 16:54:30.638644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.638690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.653488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166fc560 00:26:17.138 [2024-10-17 16:54:30.655426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.655464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.667174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e1b48 00:26:17.138 [2024-10-17 16:54:30.669350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.669402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.676504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166f6890 00:26:17.138 [2024-10-17 16:54:30.677402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.677435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.690123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166fac10 00:26:17.138 [2024-10-17 16:54:30.691188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.691218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.703102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166fb048 00:26:17.138 [2024-10-17 16:54:30.704285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.704339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.716280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e88f8 00:26:17.138 [2024-10-17 16:54:30.717500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.717533] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.728795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e23b8 00:26:17.138 [2024-10-17 16:54:30.729487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.729526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.744314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166eea00 00:26:17.138 [2024-10-17 16:54:30.746279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.746308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.757368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e38d0 00:26:17.138 [2024-10-17 16:54:30.759117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.759159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.765810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166fc128 00:26:17.138 [2024-10-17 16:54:30.766661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.766693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.778548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166ef270 00:26:17.138 [2024-10-17 16:54:30.779506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.779539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.793137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e8088 00:26:17.138 [2024-10-17 16:54:30.794650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.794684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.806155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166edd58 00:26:17.138 [2024-10-17 16:54:30.808159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.138 [2024-10-17 16:54:30.808190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.138 [2024-10-17 16:54:30.819327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e88f8 00:26:17.138 [2024-10-17 16:54:30.821248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25548 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:17.138 [2024-10-17 16:54:30.821277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.398 [2024-10-17 16:54:30.827912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166f9f68 00:26:17.398 [2024-10-17 16:54:30.828975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.398 [2024-10-17 16:54:30.829036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.398 [2024-10-17 16:54:30.843214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166f20d8 00:26:17.398 [2024-10-17 16:54:30.845076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.398 [2024-10-17 16:54:30.845105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.398 [2024-10-17 16:54:30.854822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e12d8 00:26:17.398 [2024-10-17 16:54:30.856552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.398 [2024-10-17 16:54:30.856587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.398 [2024-10-17 16:54:30.866121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e8088 00:26:17.398 [2024-10-17 16:54:30.867162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:15383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.398 [2024-10-17 16:54:30.867190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.398 [2024-10-17 16:54:30.880848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166ef270 00:26:17.398 [2024-10-17 16:54:30.882276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.398 [2024-10-17 16:54:30.882323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.893289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e0ea0 00:26:17.399 [2024-10-17 16:54:30.894695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.894727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.909084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166f0350 00:26:17.399 [2024-10-17 16:54:30.911183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.911228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.918278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e3498 00:26:17.399 [2024-10-17 16:54:30.919410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.919441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.933794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:30.934098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.934127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.948388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:30.948670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.948701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.962979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:30.963284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.963327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.977472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 
[2024-10-17 16:54:30.977746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.977778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:30.991997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:30.992332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:30.992376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:31.006579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:31.006857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:31.006888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:31.021088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:31.021367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:31.021410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:31.035514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:31.035791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:31.035822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:31.049970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:31.050275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:31.050321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:31.064525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:31.064803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:31.064835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.399 [2024-10-17 16:54:31.078968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.399 [2024-10-17 16:54:31.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.399 [2024-10-17 16:54:31.079309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.093373] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.093734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.093767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.107757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.108051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.108085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.122399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.122677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.122708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.136951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.137234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.137262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:17.660 [2024-10-17 16:54:31.151214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.151488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.151520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.165566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.165842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.165873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.179915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.180222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.180251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.194401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.194678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.194710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.208816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.209110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.209138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.223219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.223513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.223559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.237717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.238019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.238051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.252096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.252373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.252404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.266514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.266787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.266819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.280931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.281230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.281273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.295568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.295846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.295877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.309983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.310351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.310395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.324437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.324711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.324742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.660 [2024-10-17 16:54:31.338815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.660 [2024-10-17 16:54:31.339117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.660 [2024-10-17 16:54:31.339159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.353191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.921 [2024-10-17 16:54:31.353467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.921 [2024-10-17 16:54:31.353498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.367593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.921 [2024-10-17 16:54:31.367870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:17.921 [2024-10-17 16:54:31.367902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.381908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.921 [2024-10-17 16:54:31.382198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.921 [2024-10-17 16:54:31.382229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.396286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.921 [2024-10-17 16:54:31.396568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.921 [2024-10-17 16:54:31.396597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.410553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.921 [2024-10-17 16:54:31.410832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.921 [2024-10-17 16:54:31.410864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.424935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.921 [2024-10-17 16:54:31.425282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2881 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.921 [2024-10-17 16:54:31.425325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.921 [2024-10-17 16:54:31.439380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.439650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.439682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.453901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.454232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.454261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.468302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.468619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.468651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.482770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.483067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.483102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.497112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.497427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.497458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.511514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.511799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.511831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.525758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.526046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.526074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.540122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.540402] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.540433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.554473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.554749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.554781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.568667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.568946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.568978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.583052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.583351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.583383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.922 [2024-10-17 16:54:31.597404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with 
pdu=0x2000166e4578 00:26:17.922 [2024-10-17 16:54:31.597689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.922 [2024-10-17 16:54:31.597720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 18288.00 IOPS, 71.44 MiB/s [2024-10-17T14:54:31.871Z] [2024-10-17 16:54:31.612065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.612390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.612422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 [2024-10-17 16:54:31.626360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.626636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.626666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 [2024-10-17 16:54:31.640590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.640865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.640897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 [2024-10-17 
16:54:31.654954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.655240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.655267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 [2024-10-17 16:54:31.669095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.669389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.669421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 [2024-10-17 16:54:31.683517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.683798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.683830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.181 [2024-10-17 16:54:31.697877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.181 [2024-10-17 16:54:31.698164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.181 [2024-10-17 16:54:31.698201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.712260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.712555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.712588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.726572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.726882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.740952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.741236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.741265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.755388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.755664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.755696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.769787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.770089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.770133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.784198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.784478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.784510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.798498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.798772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.798803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.812745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.813027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.813074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.827135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.827413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.827444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.841629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.841910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.841942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.856056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.856332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 [2024-10-17 16:54:31.856384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.182 [2024-10-17 16:54:31.870314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.182 [2024-10-17 16:54:31.870609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.182 
[2024-10-17 16:54:31.870640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.884432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.884707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.884738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.898554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.898835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.898868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.912826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.913104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.913147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.926991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.927271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4340 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.927298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.941020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.941298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.941327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.955305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.955568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.955594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.969689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.969976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.970017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.983484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.983755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:19412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.983782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:31.997176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:31.997447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:31.997477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.011570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.011844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.011875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.026091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.026385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.040507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.040790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.040820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.055096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.055363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.055394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.069571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.069848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.069884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.084058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.084345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.084389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.098507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 
[2024-10-17 16:54:32.098787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.098817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.112967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.113268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.113295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.441 [2024-10-17 16:54:32.127415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.441 [2024-10-17 16:54:32.127688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.441 [2024-10-17 16:54:32.127719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.141703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.141981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.142019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.155970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.156275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.156302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.170454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.170725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.170755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.184711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.184988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.185027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.199141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.199430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.199460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.213560] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.213838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.213868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.227914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.228197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.228233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.242215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.242511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.242541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.256509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.256781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.256812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:18.700 [2024-10-17 16:54:32.270336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.270587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.270615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.284510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.284782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.284812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.299135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.299384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.299411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.313526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.313804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.327904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.328193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.328220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.342355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.342704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.342734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.356811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.357154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.357182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.371300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.371575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.371617] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.700 [2024-10-17 16:54:32.385743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.700 [2024-10-17 16:54:32.385994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.700 [2024-10-17 16:54:32.386029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.399991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.400302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.400345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.414527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.414803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.414833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.429074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.429332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.429360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.443198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.443451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.443478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.457399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.457672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.457701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.471888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.472175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.472202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.486292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.486572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:18.959 [2024-10-17 16:54:32.486617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.500675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.500949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.959 [2024-10-17 16:54:32.500979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.959 [2024-10-17 16:54:32.514883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.959 [2024-10-17 16:54:32.515176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.515204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 [2024-10-17 16:54:32.529372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.529641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.529671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 [2024-10-17 16:54:32.543777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.544071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1464 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.544098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 [2024-10-17 16:54:32.558179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.558448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.558479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 [2024-10-17 16:54:32.572573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.572849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.572879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 [2024-10-17 16:54:32.587011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.587305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.587351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 [2024-10-17 16:54:32.601296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.601596] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.601625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 18061.00 IOPS, 70.55 MiB/s [2024-10-17T14:54:32.650Z] [2024-10-17 16:54:32.615904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f942d0) with pdu=0x2000166e4578 00:26:18.960 [2024-10-17 16:54:32.616204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:18.960 [2024-10-17 16:54:32.616232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.960 00:26:18.960 Latency(us) 00:26:18.960 [2024-10-17T14:54:32.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.960 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:18.960 nvme0n1 : 2.01 18058.62 70.54 0.00 0.00 7071.29 2767.08 16602.45 00:26:18.960 [2024-10-17T14:54:32.650Z] =================================================================================================================== 00:26:18.960 [2024-10-17T14:54:32.650Z] Total : 18058.62 70.54 0.00 0.00 7071.29 2767.08 16602.45 00:26:18.960 { 00:26:18.960 "results": [ 00:26:18.960 { 00:26:18.960 "job": "nvme0n1", 00:26:18.960 "core_mask": "0x2", 00:26:18.960 "workload": "randwrite", 00:26:18.960 "status": "finished", 00:26:18.960 "queue_depth": 128, 00:26:18.960 "io_size": 4096, 00:26:18.960 "runtime": 2.009124, 00:26:18.960 "iops": 18058.61659111135, 00:26:18.960 "mibps": 70.54147105902871, 00:26:18.960 "io_failed": 0, 00:26:18.960 "io_timeout": 0, 00:26:18.960 "avg_latency_us": 7071.288867982696, 00:26:18.960 "min_latency_us": 2767.0755555555556, 00:26:18.960 "max_latency_us": 16602.453333333335 
00:26:18.960 } 00:26:18.960 ], 00:26:18.960 "core_count": 1 00:26:18.960 } 00:26:18.960 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:18.960 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:18.960 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:18.960 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:18.960 | .driver_specific 00:26:18.960 | .nvme_error 00:26:18.960 | .status_code 00:26:18.960 | .command_transient_transport_error' 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2465871 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2465871 ']' 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2465871 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.219 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2465871 00:26:19.477 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:19.477 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:19.477 16:54:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2465871' 00:26:19.477 killing process with pid 2465871 00:26:19.477 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2465871 00:26:19.477 Received shutdown signal, test time was about 2.000000 seconds 00:26:19.477 00:26:19.477 Latency(us) 00:26:19.477 [2024-10-17T14:54:33.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.477 [2024-10-17T14:54:33.167Z] =================================================================================================================== 00:26:19.477 [2024-10-17T14:54:33.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:19.477 16:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2465871 00:26:19.477 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:19.477 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:19.477 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:19.477 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:19.477 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:19.477 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2466394 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2466394 /var/tmp/bperf.sock 00:26:19.478 16:54:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2466394 ']' 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:19.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.478 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.738 [2024-10-17 16:54:33.197068] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:19.738 [2024-10-17 16:54:33.197154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466394 ] 00:26:19.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.738 Zero copy mechanism will not be used. 
00:26:19.738 [2024-10-17 16:54:33.257538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.738 [2024-10-17 16:54:33.318700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.997 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.997 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:19.997 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.997 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.257 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.257 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.257 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.257 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.257 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.257 16:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.516 nvme0n1 00:26:20.516 16:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:20.516 16:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.516 16:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.516 16:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.516 16:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:20.516 16:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:20.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.777 Zero copy mechanism will not be used. 00:26:20.777 Running I/O for 2 seconds... 00:26:20.777 [2024-10-17 16:54:34.288092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:20.777 [2024-10-17 16:54:34.288427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.777 [2024-10-17 16:54:34.288466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.777 [2024-10-17 16:54:34.293593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:20.777 [2024-10-17 16:54:34.293884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.777 [2024-10-17 16:54:34.293914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.777 
[2024-10-17 16:54:34.298805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:20.777 [2024-10-17 16:54:34.299124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.777 [2024-10-17 16:54:34.299154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.777 [2024-10-17 16:54:34.304015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:20.777 [2024-10-17 16:54:34.304326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.777 [2024-10-17 16:54:34.304355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.777 [2024-10-17 16:54:34.309201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:20.777 [2024-10-17 16:54:34.309500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.777 [2024-10-17 16:54:34.309543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.777 [2024-10-17 16:54:34.314763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:20.777 [2024-10-17 16:54:34.315113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.777 [2024-10-17 16:54:34.315143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.777 [2024-10-17 16:54:34.320615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.777 [2024-10-17 16:54:34.320923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.777 [2024-10-17 16:54:34.320951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.326381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.326673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.326701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.332385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.332682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.332710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.338130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.338418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.338447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.343781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.344108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.344137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.349754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.350075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.350105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.355424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.355718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.355745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.361167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.361468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.361496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.366899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.367195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.367224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.372533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.372830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.372859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.377924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.378217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.378247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.382963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.383260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.383289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.388488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.388779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.388806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.393770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.394063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.394091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.398874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.399274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.399303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.404878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.405204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.405248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.410659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.410970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.411025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.417951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.418276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.418315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.423681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.423993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.424029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.429166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.429464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.429492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.434461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.434828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.434856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.440516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.440790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.440818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.447076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.447395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.447423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.453335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.453631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.453658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.459799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.460122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.460150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.778 [2024-10-17 16:54:34.466249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:20.778 [2024-10-17 16:54:34.466551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.778 [2024-10-17 16:54:34.466579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.472285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.472583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.472612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.478383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.478656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.478685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.484755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.485066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.485094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.491246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.491543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.491571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.496895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.497191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.497219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.502429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.502738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.502765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.508420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.508732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.508760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.514214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.514516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.514543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.519434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.519730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.519758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.525031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.525318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.525361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.531294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.531598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.531626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.537545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.537834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.537862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.543806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.544086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.040 [2024-10-17 16:54:34.544115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.040 [2024-10-17 16:54:34.550190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.040 [2024-10-17 16:54:34.550486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.550515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.556532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.556807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.556835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.562056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.562359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.562386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.567311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.567608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.567636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.572457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.572745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.572778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.577724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.578049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.578078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.582914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.583209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.583237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.588684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.588993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.589045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.594520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.594823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.594850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.600792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.601113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.601142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.606780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.607115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.607144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.612406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.612706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.612733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.617501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.617800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.617827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.622754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.623072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.623100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.627895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.628241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.628270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.633064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.633367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.633393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.638208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.638496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.638539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.643362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.643675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.643702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.648734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.649055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.649083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.653794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.654085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.654112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.658798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.659140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.659168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.664259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.664594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.664632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.669905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.670240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.670267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.675637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.675952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.675982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.681274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.681602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.681632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.686992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.687397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.687428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.692796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.693132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.693160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.698390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.698707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.698738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.704444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.041 [2024-10-17 16:54:34.704764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.041 [2024-10-17 16:54:34.704794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.041 [2024-10-17 16:54:34.710441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.042 [2024-10-17 16:54:34.710760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.042 [2024-10-17 16:54:34.710792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.042 [2024-10-17 16:54:34.715998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.042 [2024-10-17 16:54:34.716347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.042 [2024-10-17 16:54:34.716378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.042 [2024-10-17 16:54:34.721607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.042 [2024-10-17 16:54:34.721924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.042 [2024-10-17 16:54:34.721955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.042 [2024-10-17 16:54:34.727245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.042 [2024-10-17 16:54:34.727571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.042 [2024-10-17 16:54:34.727602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.303 [2024-10-17 16:54:34.732902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.303 [2024-10-17 16:54:34.733230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.303 [2024-10-17 16:54:34.733258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.303 [2024-10-17 16:54:34.738557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.303 [2024-10-17 16:54:34.738884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.303 [2024-10-17 16:54:34.738915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.303 [2024-10-17 16:54:34.744373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.303 [2024-10-17 16:54:34.744692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.303 [2024-10-17 16:54:34.744723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.303 [2024-10-17 16:54:34.750204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.303 [2024-10-17 16:54:34.750528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.303 [2024-10-17 16:54:34.750560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.303 [2024-10-17 16:54:34.755740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.303 [2024-10-17 16:54:34.756079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.303 [2024-10-17 16:54:34.756107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.303 [2024-10-17 16:54:34.761439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90
00:26:21.303 [2024-10-17 16:54:34.761757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.761788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.767094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.767421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.767452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.772724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.773059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.773087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.778354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.778696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.778726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.784012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.784342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.784368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.789728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.790068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.790096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.795350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.795668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.801016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.801328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.801359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.806771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.807103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.807132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.812385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.812710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.812747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.818092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.818415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.818446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.823780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.303 [2024-10-17 16:54:34.824115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.303 [2024-10-17 16:54:34.824143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.303 [2024-10-17 16:54:34.829600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:21.304 [2024-10-17 16:54:34.829919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.829950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.835904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.836219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.836247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.842152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.842482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.842515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.848248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.848582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.848613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.853889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.854224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.854253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.859449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.859766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.859797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.864939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.865260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.865288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.870519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.870844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.870875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 
16:54:34.876152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.876483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.876515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.882312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.882633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.882664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.887951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.888272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.888318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.893617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.893935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.893966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.899192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.899542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.899574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.904835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.905175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.905202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.910495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.910813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.910844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.916695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.917021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.917065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.923105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.923432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.923464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.928645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.928961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.928992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.934162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.934500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.934531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.939825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.940165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.940193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.945539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.945864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.945895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.951210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.951533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.951564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.956719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.957041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.957083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.962291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.962627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.962664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.968394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.968721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.968753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.304 [2024-10-17 16:54:34.974079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.304 [2024-10-17 16:54:34.974437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.304 [2024-10-17 16:54:34.974469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.305 [2024-10-17 16:54:34.979656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.305 [2024-10-17 16:54:34.979972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.305 [2024-10-17 16:54:34.980011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.305 [2024-10-17 16:54:34.985290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.305 [2024-10-17 16:54:34.985612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.305 [2024-10-17 16:54:34.985643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.305 [2024-10-17 16:54:34.990848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.305 [2024-10-17 16:54:34.991184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.305 [2024-10-17 16:54:34.991213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:34.996332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:34.996634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:34.996665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.001900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.002235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.002263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.007568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.007886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.007917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.013306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.013648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.013679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.019666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.019983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.020022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.026105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.026478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.026510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.032434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:21.565 [2024-10-17 16:54:35.032753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.032785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.038569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.038886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.038917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.044909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.045224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.045251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.052054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.052469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.052502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.059614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.059927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.059960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.066841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.067191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.067226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.074518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.074870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.074902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.081602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.081914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.081947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 
16:54:35.088471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.088789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.096129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.096455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.096488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.103099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.103410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.103443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.108926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.109236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.109265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.114845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.115184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.115213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.120772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.121102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.121131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.126317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.126681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.126714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.133406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.133742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.133774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.140463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.140794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.140826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.147557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.147910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.565 [2024-10-17 16:54:35.147943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.565 [2024-10-17 16:54:35.155012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.565 [2024-10-17 16:54:35.155349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.155382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.162624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.162934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.162968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.169094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.169397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.169431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.175630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.175945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.175978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.181370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.181682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.181716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.187148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.187518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.187551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.192821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.193142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.193172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.198304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.198615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.198648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.203930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.204281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.204324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.209747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.210079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.210108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.215448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.215743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.215776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.221882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.222187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.222232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.227727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.228063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.228101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.234184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.234506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.234546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.240527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.240840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.240873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.247210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.247531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.247565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.566 [2024-10-17 16:54:35.253506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.566 [2024-10-17 16:54:35.253804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.566 [2024-10-17 16:54:35.253837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.827 [2024-10-17 16:54:35.260362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:21.827 [2024-10-17 16:54:35.260673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.827 [2024-10-17 16:54:35.260705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.827 [2024-10-17 16:54:35.266605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.266952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.266984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.272399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.272716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.272748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 5231.00 IOPS, 653.88 MiB/s [2024-10-17T14:54:35.518Z] [2024-10-17 16:54:35.279572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.279919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.279951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 
16:54:35.285478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.285789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.285822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.291187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.291511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.291542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.296991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.297330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.297377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.303589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.303909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.303942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.309907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.310233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.310272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.316702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.316997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.317064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.323282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.323635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.323668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.330100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.330453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.330485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.336485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.336844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.336876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.342085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.342400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.342434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.347722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.348085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.348115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.354106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.354493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.354535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.359815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.360134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.360165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.365358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.365682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.365713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.370921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.371257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.371286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.377194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.377582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.377614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.383791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.384132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.384162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.390425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.390756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.390800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.396567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.396881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.396920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.403184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.403495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.403528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.409838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.410156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.410185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.415577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.415890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.415922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.421554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.828 [2024-10-17 16:54:35.421898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.828 [2024-10-17 16:54:35.421930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.828 [2024-10-17 16:54:35.427431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.427741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.427773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.433665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.434011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.434056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.440821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.441172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.441200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.446565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.446875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.446908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.452852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:21.829 [2024-10-17 16:54:35.453174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.453202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.459378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.459723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.459757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.465973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.466315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.466360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.471945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.472280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.472308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.477623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.477935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.477967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.483567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.483879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.483912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.489220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.489541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.489575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.494840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.495179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.495208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 
16:54:35.500650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.500946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.500985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.507708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.508072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.508101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.829 [2024-10-17 16:54:35.513859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:21.829 [2024-10-17 16:54:35.514175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.829 [2024-10-17 16:54:35.514205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.090 [2024-10-17 16:54:35.520203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.090 [2024-10-17 16:54:35.520583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.090 [2024-10-17 16:54:35.520615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.090 [2024-10-17 16:54:35.527137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.090 [2024-10-17 16:54:35.527448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.090 [2024-10-17 16:54:35.527480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.090 [2024-10-17 16:54:35.533872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.534219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.534249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.540552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.540900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.540932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.547543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.547838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.547871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.554117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.554488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.554522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.560323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.560656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.560689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.565973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.566310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.566339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.571568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.571879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.571912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.577210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.577527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.577559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.582857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.583176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.583207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.588520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.588832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.588865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.595211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.595534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.595566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.601930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.602276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.602304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.608749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.609109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.609140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.615746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.616088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.616118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.622419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.622719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.622749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.629031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.629441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.629470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.635246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.635626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.635670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.641801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.642103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.642132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.648527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.648812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.091 [2024-10-17 16:54:35.648840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.091 [2024-10-17 16:54:35.655291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.091 [2024-10-17 16:54:35.655670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.655699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.662512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.662853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.662897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.669073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.669366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.669403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.674965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:22.092 [2024-10-17 16:54:35.675290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.675319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.681248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.681336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.681362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.686685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.686995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.687033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.691884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.692177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.692207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.696972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.697266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.697295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.702035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.702323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.702352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.707028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.707326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.707356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.712132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.712452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.712482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 
16:54:35.717271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.717559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.717588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.722843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.723163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.723192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.728461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.728784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.728813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.733621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.733917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.733962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.738697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.739076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.739105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.743862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.744151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.744181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.749037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.749321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.749350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.754888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.755165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.755196] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.760123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.760406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.760436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.765416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.765733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.765762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.771228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.771512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 16:54:35.771544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.092 [2024-10-17 16:54:35.777033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.092 [2024-10-17 16:54:35.777324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.092 [2024-10-17 
16:54:35.777354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.783067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.783350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.783380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.789141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.789517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.789547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.795243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.795608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.795638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.801685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.801970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.802026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.808534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.808810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.808838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.814829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.815072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.815109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.820212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.820463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.820493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.824879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.825122] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.825153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.829539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.829789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.834246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.834492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.834522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.838925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 16:54:35.839165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.839195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.354 [2024-10-17 16:54:35.843578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.354 [2024-10-17 
16:54:35.843824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.354 [2024-10-17 16:54:35.843854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.848230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.848480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.848510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.852939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.853179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.853210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.858024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.858268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.858298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.863372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.863625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.863654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.868507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.868759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.868788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.873311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.873559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.873588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.878420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.878670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.878699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.884477] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.884740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.884770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.891134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.891447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.891476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.897269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.897545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.897575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.903338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.903629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.908714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.908963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.908992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.914451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.914731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.914761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.920282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.920518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.920548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.925335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.925678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.925708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.930209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.930445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.930485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.934940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.935185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.935215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.939658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.939902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.939931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.944997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.945274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.945319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.950998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.951300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.951344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.957019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.957297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.957327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.962947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.963219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.963249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.968553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.968805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.968836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.974107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.974393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.974424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.979710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.979980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.980016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.985931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.986173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.986200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.992156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.992521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.992552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:35.998514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:35.998773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.355 [2024-10-17 16:54:35.998804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.355 [2024-10-17 16:54:36.003886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.355 [2024-10-17 16:54:36.004126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.004155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.008600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.008837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.008866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.013315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.013565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.013595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.018307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.018530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.018560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.023210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.023433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.023462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.028122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.028345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.028375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.032975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:22.356 [2024-10-17 16:54:36.033206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.033236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.037945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.038174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.038204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.356 [2024-10-17 16:54:36.042880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.356 [2024-10-17 16:54:36.043112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.356 [2024-10-17 16:54:36.043147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.615 [2024-10-17 16:54:36.047874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.615 [2024-10-17 16:54:36.048106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.615 [2024-10-17 16:54:36.048136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.615 [2024-10-17 16:54:36.052913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.053145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.053184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.057981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.058215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.058243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.062810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.063070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.063098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.067539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.067762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.067790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 
16:54:36.071963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.072200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.072231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.076314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.076523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.076551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.080540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.080747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.080774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.084826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.085049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.085077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.089342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.089549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.089577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.094396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.094689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.094719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.100234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.100533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.100564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.105719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.105998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.106037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.110857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.110990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.111025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.116045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.116199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.116227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.121684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.121794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.121822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.126972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.127132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.127161] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.132222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.132344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.132371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.137388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.137544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.137572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.142581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.142731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.142758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.147668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.147836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.147864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.152750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.152943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.152971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.157933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.158116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.158143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.163380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.163561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.163588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.169057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.169218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.169246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.174152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.174292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.174326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.179378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.179513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.179541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.616 [2024-10-17 16:54:36.183957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.616 [2024-10-17 16:54:36.184144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.616 [2024-10-17 16:54:36.184172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.189163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.189352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.189380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.194672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.194777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.194804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.200293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.200492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.200518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.205668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.205840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.205867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.210780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 
00:26:22.617 [2024-10-17 16:54:36.210921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.210949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.215563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.215679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.215707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.220517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.220688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.220715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.226114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.226257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.226285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.231216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.231384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.231412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.236402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.236549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.236577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.241550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.241763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.241791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.247578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.247725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.247754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 
16:54:36.252556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.252668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.252696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.256927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.257106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.261376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.261496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.261523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.265816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.265940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.265968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.270152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.270241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.270269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.274536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.274699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.274726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.617 [2024-10-17 16:54:36.279508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f94610) with pdu=0x2000166fef90 00:26:22.617 [2024-10-17 16:54:36.281125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.617 [2024-10-17 16:54:36.281157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.617 5378.00 IOPS, 672.25 MiB/s 00:26:22.617 Latency(us) 00:26:22.617 [2024-10-17T14:54:36.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.617 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:22.617 nvme0n1 : 2.00 5375.39 671.92 0.00 0.00 2969.05 2051.03 11019.76 00:26:22.617 [2024-10-17T14:54:36.307Z] 
=================================================================================================================== 00:26:22.617 [2024-10-17T14:54:36.307Z] Total : 5375.39 671.92 0.00 0.00 2969.05 2051.03 11019.76 00:26:22.617 { 00:26:22.617 "results": [ 00:26:22.617 { 00:26:22.617 "job": "nvme0n1", 00:26:22.617 "core_mask": "0x2", 00:26:22.617 "workload": "randwrite", 00:26:22.617 "status": "finished", 00:26:22.617 "queue_depth": 16, 00:26:22.617 "io_size": 131072, 00:26:22.617 "runtime": 2.004507, 00:26:22.617 "iops": 5375.386566372679, 00:26:22.617 "mibps": 671.9233207965849, 00:26:22.617 "io_failed": 0, 00:26:22.617 "io_timeout": 0, 00:26:22.617 "avg_latency_us": 2969.0483053708, 00:26:22.617 "min_latency_us": 2051.034074074074, 00:26:22.617 "max_latency_us": 11019.757037037038 00:26:22.617 } 00:26:22.617 ], 00:26:22.617 "core_count": 1 00:26:22.617 } 00:26:22.617 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:22.617 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:22.617 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:22.617 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:22.617 | .driver_specific 00:26:22.617 | .nvme_error 00:26:22.617 | .status_code 00:26:22.617 | .command_transient_transport_error' 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 347 > 0 )) 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2466394 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2466394 ']' 00:26:23.184 16:54:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2466394 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2466394 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2466394' 00:26:23.184 killing process with pid 2466394 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2466394 00:26:23.184 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.184 00:26:23.184 Latency(us) 00:26:23.184 [2024-10-17T14:54:36.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.184 [2024-10-17T14:54:36.874Z] =================================================================================================================== 00:26:23.184 [2024-10-17T14:54:36.874Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2466394 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2464913 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2464913 ']' 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # kill -0 2464913 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.184 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2464913 00:26:23.443 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:23.443 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:23.443 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2464913' 00:26:23.443 killing process with pid 2464913 00:26:23.443 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2464913 00:26:23.443 16:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2464913 00:26:23.443 00:26:23.443 real 0m15.505s 00:26:23.443 user 0m29.914s 00:26:23.443 sys 0m4.504s 00:26:23.443 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:23.443 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:23.443 ************************************ 00:26:23.443 END TEST nvmf_digest_error 00:26:23.443 ************************************ 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.705 rmmod nvme_tcp 00:26:23.705 rmmod nvme_fabrics 00:26:23.705 rmmod nvme_keyring 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2464913 ']' 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2464913 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2464913 ']' 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2464913 00:26:23.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2464913) - No such process 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2464913 is not found' 00:26:23.705 Process with pid 2464913 is not found 00:26:23.705 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:26:23.706 16:54:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.706 16:54:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.608 00:26:25.608 real 0m35.484s 00:26:25.608 user 1m0.772s 00:26:25.608 sys 0m10.606s 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:25.608 ************************************ 00:26:25.608 END TEST nvmf_digest 00:26:25.608 ************************************ 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.608 16:54:39 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.866 ************************************ 00:26:25.866 START TEST nvmf_bdevperf 00:26:25.866 ************************************ 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:25.866 * Looking for test storage... 00:26:25.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:26:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.866 --rc genhtml_branch_coverage=1 00:26:25.866 --rc genhtml_function_coverage=1 00:26:25.866 --rc genhtml_legend=1 00:26:25.866 --rc geninfo_all_blocks=1 00:26:25.866 --rc geninfo_unexecuted_blocks=1 00:26:25.866 00:26:25.866 ' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.866 --rc genhtml_branch_coverage=1 00:26:25.866 --rc genhtml_function_coverage=1 00:26:25.866 --rc genhtml_legend=1 00:26:25.866 --rc geninfo_all_blocks=1 00:26:25.866 --rc geninfo_unexecuted_blocks=1 00:26:25.866 00:26:25.866 ' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.866 --rc genhtml_branch_coverage=1 00:26:25.866 --rc genhtml_function_coverage=1 00:26:25.866 --rc genhtml_legend=1 00:26:25.866 --rc geninfo_all_blocks=1 00:26:25.866 --rc geninfo_unexecuted_blocks=1 00:26:25.866 00:26:25.866 ' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.866 --rc genhtml_branch_coverage=1 00:26:25.866 --rc genhtml_function_coverage=1 00:26:25.866 --rc genhtml_legend=1 00:26:25.866 --rc geninfo_all_blocks=1 00:26:25.866 --rc geninfo_unexecuted_blocks=1 00:26:25.866 00:26:25.866 ' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:25.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:26:25.866 16:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:28.398 Found 
0000:09:00.0 (0x8086 - 0x159b) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:28.398 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp 
== tcp ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:28.398 Found net devices under 0000:09:00.0: cvl_0_0 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:28.398 Found net devices under 0000:09:00.1: cvl_0_1 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@440 -- # is_hw=yes 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:26:28.398 00:26:28.398 --- 10.0.0.2 ping statistics --- 00:26:28.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.398 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:26:28.398 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:26:28.398 00:26:28.398 --- 10.0.0.1 ping statistics --- 00:26:28.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.399 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2468758 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2468758 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2468758 ']' 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 [2024-10-17 16:54:41.661524] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:28.399 [2024-10-17 16:54:41.661613] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.399 [2024-10-17 16:54:41.732202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:28.399 [2024-10-17 16:54:41.797733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.399 [2024-10-17 16:54:41.797795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:28.399 [2024-10-17 16:54:41.797821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.399 [2024-10-17 16:54:41.797835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.399 [2024-10-17 16:54:41.797847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.399 [2024-10-17 16:54:41.799365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.399 [2024-10-17 16:54:41.799422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.399 [2024-10-17 16:54:41.799418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 [2024-10-17 16:54:41.952503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.399 16:54:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 Malloc0 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.399 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.399 [2024-10-17 16:54:42.007596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:28.399 { 00:26:28.399 "params": { 00:26:28.399 "name": "Nvme$subsystem", 00:26:28.399 "trtype": "$TEST_TRANSPORT", 00:26:28.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.399 "adrfam": "ipv4", 00:26:28.399 "trsvcid": "$NVMF_PORT", 00:26:28.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.399 "hdgst": ${hdgst:-false}, 00:26:28.399 "ddgst": ${ddgst:-false} 00:26:28.399 }, 00:26:28.399 "method": "bdev_nvme_attach_controller" 00:26:28.399 } 00:26:28.399 EOF 00:26:28.399 )") 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:26:28.399 16:54:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:28.399 "params": { 00:26:28.399 "name": "Nvme1", 00:26:28.399 "trtype": "tcp", 00:26:28.399 "traddr": "10.0.0.2", 00:26:28.399 "adrfam": "ipv4", 00:26:28.399 "trsvcid": "4420", 00:26:28.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.399 "hdgst": false, 00:26:28.399 "ddgst": false 00:26:28.399 }, 00:26:28.399 "method": "bdev_nvme_attach_controller" 00:26:28.399 }' 00:26:28.399 [2024-10-17 16:54:42.057071] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:28.399 [2024-10-17 16:54:42.057151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468780 ] 00:26:28.659 [2024-10-17 16:54:42.118393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.659 [2024-10-17 16:54:42.178093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.918 Running I/O for 1 seconds... 
00:26:29.857 8509.00 IOPS, 33.24 MiB/s 00:26:29.857 Latency(us) 00:26:29.857 [2024-10-17T14:54:43.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.857 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.857 Verification LBA range: start 0x0 length 0x4000 00:26:29.857 Nvme1n1 : 1.01 8605.82 33.62 0.00 0.00 14791.19 1268.24 14175.19 00:26:29.857 [2024-10-17T14:54:43.547Z] =================================================================================================================== 00:26:29.857 [2024-10-17T14:54:43.547Z] Total : 8605.82 33.62 0.00 0.00 14791.19 1268.24 14175.19 00:26:30.115 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2469043 00:26:30.115 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:30.116 { 00:26:30.116 "params": { 00:26:30.116 "name": "Nvme$subsystem", 00:26:30.116 "trtype": "$TEST_TRANSPORT", 00:26:30.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.116 "adrfam": "ipv4", 00:26:30.116 "trsvcid": "$NVMF_PORT", 00:26:30.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.116 "hdgst": ${hdgst:-false}, 00:26:30.116 "ddgst": 
${ddgst:-false} 00:26:30.116 }, 00:26:30.116 "method": "bdev_nvme_attach_controller" 00:26:30.116 } 00:26:30.116 EOF 00:26:30.116 )") 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:26:30.116 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:30.116 "params": { 00:26:30.116 "name": "Nvme1", 00:26:30.116 "trtype": "tcp", 00:26:30.116 "traddr": "10.0.0.2", 00:26:30.116 "adrfam": "ipv4", 00:26:30.116 "trsvcid": "4420", 00:26:30.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.116 "hdgst": false, 00:26:30.116 "ddgst": false 00:26:30.116 }, 00:26:30.116 "method": "bdev_nvme_attach_controller" 00:26:30.116 }' 00:26:30.116 [2024-10-17 16:54:43.725876] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:30.116 [2024-10-17 16:54:43.725970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469043 ] 00:26:30.116 [2024-10-17 16:54:43.783722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.374 [2024-10-17 16:54:43.841241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.633 Running I/O for 15 seconds... 
00:26:32.505 8556.00 IOPS, 33.42 MiB/s [2024-10-17T14:54:46.766Z] 8598.00 IOPS, 33.59 MiB/s [2024-10-17T14:54:46.766Z] 16:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2468758 00:26:33.076 16:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:33.076 [2024-10-17 16:54:46.696884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.696952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.076 [2024-10-17 16:54:46.696983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.697012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.076 [2024-10-17 16:54:46.697035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.697053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.076 [2024-10-17 16:54:46.697086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.697102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.076 [2024-10-17 16:54:46.697118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.697134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.076 [2024-10-17 16:54:46.697152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.697166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.076 [2024-10-17 16:54:46.697182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.076 [2024-10-17 16:54:46.697195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.077 [2024-10-17 16:54:46.697357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.077 [2024-10-17 16:54:46.697917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.697981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.697998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.077 [2024-10-17 16:54:46.698377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.077 [2024-10-17 16:54:46.698394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 
[2024-10-17 16:54:46.698505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.698973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.698993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.078 [2024-10-17 16:54:46.699083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.078 [2024-10-17 16:54:46.699406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.078 [2024-10-17 16:54:46.699444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.078 [2024-10-17 16:54:46.699477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.078 [2024-10-17 16:54:46.699511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.078 [2024-10-17 16:54:46.699543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.078 [2024-10-17 16:54:46.699576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.078 [2024-10-17 16:54:46.699592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.078 [2024-10-17 16:54:46.699608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 
[2024-10-17 16:54:46.699640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.699978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.699995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 
16:54:46.700617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.079 [2024-10-17 16:54:46.700738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700806] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.079 [2024-10-17 16:54:46.700957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.079 [2024-10-17 16:54:46.700973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.700990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:33.080 [2024-10-17 16:54:46.701014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.080 [2024-10-17 16:54:46.701294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cabe0 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.701331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.080 [2024-10-17 16:54:46.701345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.080 [2024-10-17 16:54:46.701359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42744 len:8 PRP1 0x0 PRP2 0x0 00:26:33.080 [2024-10-17 16:54:46.701374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701441] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15cabe0 was disconnected and freed. reset controller. 
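The flood of completions above all carry the status pair "(00/08)", which in NVMe terms is status code type 0x0 (Generic Command Status) with status code 0x08, the code the spec assigns to commands aborted because their submission queue was deleted during the controller reset. A minimal sketch of decoding that pair (this is an illustrative table, not SPDK's own status-string function):

```python
# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion above,
# e.g. "ABORTED - SQ DELETION (00/08)". Only the code seen in this log is
# mapped; the mapping is illustrative, not SPDK's internal table.
GENERIC_STATUS = {
    0x08: "ABORTED - SQ DELETION",  # Command Aborted due to SQ Deletion
}

def decode_status(sct: int, sc: int) -> str:
    """Render a status-code-type / status-code pair as SPDK-style text."""
    if sct == 0x0:  # Generic Command Status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} sc 0x{sc:02x}"

print(decode_status(0x0, 0x08))  # ABORTED - SQ DELETION
```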
00:26:33.080 [2024-10-17 16:54:46.701527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.080 [2024-10-17 16:54:46.701553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.080 [2024-10-17 16:54:46.701586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.080 [2024-10-17 16:54:46.701617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.080 [2024-10-17 16:54:46.701647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.080 [2024-10-17 16:54:46.701661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.705437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.080 [2024-10-17 16:54:46.705480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.080 [2024-10-17 16:54:46.706123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-10-17 
16:54:46.706153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.080 [2024-10-17 16:54:46.706170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.706413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.080 [2024-10-17 16:54:46.706655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.080 [2024-10-17 16:54:46.706678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.080 [2024-10-17 16:54:46.706697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.080 [2024-10-17 16:54:46.710265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
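Each failed reset cycle above reports "connect() failed, errno = 111". On Linux, errno 111 is ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 while the target side is down, which is why the reconnect poll keeps failing until the listener returns. This can be confirmed from the standard library:

```python
import errno
import os

# errno 111 on Linux is ECONNREFUSED ("Connection refused"), matching the
# posix_sock_create error in the log above.
assert errno.ECONNREFUSED == 111
print(errno.errorcode[111], "-", os.strerror(111))
```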
00:26:33.080 [2024-10-17 16:54:46.719710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.080 [2024-10-17 16:54:46.720113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-10-17 16:54:46.720143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.080 [2024-10-17 16:54:46.720161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.720398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.080 [2024-10-17 16:54:46.720654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.080 [2024-10-17 16:54:46.720678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.080 [2024-10-17 16:54:46.720694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.080 [2024-10-17 16:54:46.724255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.080 [2024-10-17 16:54:46.733710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.080 [2024-10-17 16:54:46.734106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-10-17 16:54:46.734139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.080 [2024-10-17 16:54:46.734158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.734396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.080 [2024-10-17 16:54:46.734639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.080 [2024-10-17 16:54:46.734664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.080 [2024-10-17 16:54:46.734679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.080 [2024-10-17 16:54:46.738242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.080 [2024-10-17 16:54:46.747691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.080 [2024-10-17 16:54:46.748091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-10-17 16:54:46.748125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.080 [2024-10-17 16:54:46.748145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.748382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.080 [2024-10-17 16:54:46.748631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.080 [2024-10-17 16:54:46.748656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.080 [2024-10-17 16:54:46.748672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.080 [2024-10-17 16:54:46.752238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.080 [2024-10-17 16:54:46.761696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.080 [2024-10-17 16:54:46.762089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-10-17 16:54:46.762122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.080 [2024-10-17 16:54:46.762142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.080 [2024-10-17 16:54:46.762380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.080 [2024-10-17 16:54:46.762623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.080 [2024-10-17 16:54:46.762649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.080 [2024-10-17 16:54:46.762665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.341 [2024-10-17 16:54:46.766230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.341 [2024-10-17 16:54:46.775679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.341 [2024-10-17 16:54:46.776052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.341 [2024-10-17 16:54:46.776081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.341 [2024-10-17 16:54:46.776098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.341 [2024-10-17 16:54:46.776320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.341 [2024-10-17 16:54:46.776576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.341 [2024-10-17 16:54:46.776601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.341 [2024-10-17 16:54:46.776618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.341 [2024-10-17 16:54:46.780179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.341 [2024-10-17 16:54:46.789658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.341 [2024-10-17 16:54:46.790057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.341 [2024-10-17 16:54:46.790089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.341 [2024-10-17 16:54:46.790108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.341 [2024-10-17 16:54:46.790345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.341 [2024-10-17 16:54:46.790588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.341 [2024-10-17 16:54:46.790613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.341 [2024-10-17 16:54:46.790629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.341 [2024-10-17 16:54:46.794126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.341 [2024-10-17 16:54:46.803682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.341 [2024-10-17 16:54:46.804070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.341 [2024-10-17 16:54:46.804100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.341 [2024-10-17 16:54:46.804117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.341 [2024-10-17 16:54:46.804361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.341 [2024-10-17 16:54:46.804555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.341 [2024-10-17 16:54:46.804575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.341 [2024-10-17 16:54:46.804588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.341 [2024-10-17 16:54:46.808100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.341 [2024-10-17 16:54:46.817489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.341 [2024-10-17 16:54:46.817888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.341 [2024-10-17 16:54:46.817921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.341 [2024-10-17 16:54:46.817940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.341 [2024-10-17 16:54:46.818218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.341 [2024-10-17 16:54:46.818466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.341 [2024-10-17 16:54:46.818492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.341 [2024-10-17 16:54:46.818508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.341 [2024-10-17 16:54:46.822045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.341 [2024-10-17 16:54:46.831396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.341 [2024-10-17 16:54:46.831760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.341 [2024-10-17 16:54:46.831792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.341 [2024-10-17 16:54:46.831811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.341 [2024-10-17 16:54:46.832061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.832304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.832328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.832344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.835913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.845361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.845750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.845782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.845806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.846058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.846301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.846325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.846341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.849893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.859342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.859743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.859775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.859793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.860041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.860284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.860309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.860325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.863870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.873309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.873697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.873729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.873747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.873985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.874240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.874265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.874281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.877831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.887306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.887714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.887742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.887758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.887992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.888268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.888308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.888325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.891875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.901327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.901702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.901735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.901755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.901993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.902250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.902275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.902291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.905858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.915310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.915700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.915732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.915750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.915988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.916243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.916267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.916283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.919838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.929290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.929650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.929696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.929712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.929933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.930204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.930230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.930246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.933800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.943252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.943613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.943645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.943664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.943901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.944157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.944183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.944199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.947750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.957206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.957590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.957622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.957641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.957878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.958131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.958157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.958173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.961722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.971166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.971536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.971567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.342 [2024-10-17 16:54:46.971586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.342 [2024-10-17 16:54:46.971823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.342 [2024-10-17 16:54:46.972076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.342 [2024-10-17 16:54:46.972103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.342 [2024-10-17 16:54:46.972119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.342 [2024-10-17 16:54:46.975669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.342 [2024-10-17 16:54:46.985146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.342 [2024-10-17 16:54:46.985548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.342 [2024-10-17 16:54:46.985580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.343 [2024-10-17 16:54:46.985605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.343 [2024-10-17 16:54:46.985844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.343 [2024-10-17 16:54:46.986101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.343 [2024-10-17 16:54:46.986128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.343 [2024-10-17 16:54:46.986144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.343 [2024-10-17 16:54:46.989712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.343 [2024-10-17 16:54:46.999184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.343 [2024-10-17 16:54:46.999581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.343 [2024-10-17 16:54:46.999613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.343 [2024-10-17 16:54:46.999632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.343 [2024-10-17 16:54:46.999869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.343 [2024-10-17 16:54:47.000122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.343 [2024-10-17 16:54:47.000148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.343 [2024-10-17 16:54:47.000164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.343 [2024-10-17 16:54:47.003716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.343 [2024-10-17 16:54:47.013186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.343 [2024-10-17 16:54:47.013576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.343 [2024-10-17 16:54:47.013609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.343 [2024-10-17 16:54:47.013628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.343 [2024-10-17 16:54:47.013866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.343 [2024-10-17 16:54:47.014120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.343 [2024-10-17 16:54:47.014146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.343 [2024-10-17 16:54:47.014162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.343 [2024-10-17 16:54:47.017710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.343 [2024-10-17 16:54:47.027159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.343 [2024-10-17 16:54:47.027520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.343 [2024-10-17 16:54:47.027553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.343 [2024-10-17 16:54:47.027572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.343 [2024-10-17 16:54:47.027810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.343 [2024-10-17 16:54:47.028065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.343 [2024-10-17 16:54:47.028096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.343 [2024-10-17 16:54:47.028113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.031664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.041123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.041487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.041520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.041539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.041778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.042033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.042063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.604 [2024-10-17 16:54:47.042079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.045625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.055070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.055433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.055466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.055485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.055723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.055966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.055991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.604 [2024-10-17 16:54:47.056025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.059577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.069017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.069381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.069413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.069432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.069670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.069913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.069938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.604 [2024-10-17 16:54:47.069955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.073516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.082951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.083335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.083368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.083386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.083623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.083865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.083890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.604 [2024-10-17 16:54:47.083906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.087485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.096913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.097316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.097348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.097367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.097605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.097847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.097872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.604 [2024-10-17 16:54:47.097889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.101450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.110884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.111258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.111291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.111309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.111546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.111787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.111813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.604 [2024-10-17 16:54:47.111829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.604 [2024-10-17 16:54:47.115391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.604 [2024-10-17 16:54:47.124822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.604 [2024-10-17 16:54:47.125182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.604 [2024-10-17 16:54:47.125216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.604 [2024-10-17 16:54:47.125234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.604 [2024-10-17 16:54:47.125478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.604 [2024-10-17 16:54:47.125719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.604 [2024-10-17 16:54:47.125745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.125761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.129326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.138758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.139126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.139158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.139177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.139415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.139655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.139680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.139697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.143259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.152682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.153073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.153105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.153124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.153362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.153603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.153629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.153645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.157221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.166654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.167052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.167084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.167103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.167341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.167583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.167608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.167630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.171193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.180622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.181015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.181048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.181067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.181304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.181545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.181571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.181587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.185146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 7215.67 IOPS, 28.19 MiB/s [2024-10-17T14:54:47.295Z] [2024-10-17 16:54:47.194647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.195049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.195082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.195100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.195338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.195579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.195605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.195621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.199191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.208633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.208999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.209039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.209058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.209296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.209537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.209562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.209578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.213138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.222578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.222975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.223017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.223039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.223276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.223519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.223545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.223561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.227122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.236565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.236964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.236997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.237031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.237270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.237513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.237539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.237554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.241111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.250538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.250902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.250935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.250953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.251204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.251447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.251472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.251488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.255043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.264479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.264880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.264913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.605 [2024-10-17 16:54:47.264933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.605 [2024-10-17 16:54:47.265185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.605 [2024-10-17 16:54:47.265434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.605 [2024-10-17 16:54:47.265460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.605 [2024-10-17 16:54:47.265477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.605 [2024-10-17 16:54:47.269032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.605 [2024-10-17 16:54:47.278454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.605 [2024-10-17 16:54:47.278839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.605 [2024-10-17 16:54:47.278871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.606 [2024-10-17 16:54:47.278889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.606 [2024-10-17 16:54:47.279138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.606 [2024-10-17 16:54:47.279380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.606 [2024-10-17 16:54:47.279405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.606 [2024-10-17 16:54:47.279422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.606 [2024-10-17 16:54:47.282969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.606 [2024-10-17 16:54:47.292420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.606 [2024-10-17 16:54:47.292805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.606 [2024-10-17 16:54:47.292836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.606 [2024-10-17 16:54:47.292855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.606 [2024-10-17 16:54:47.293106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.293348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.293375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.867 [2024-10-17 16:54:47.293391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.867 [2024-10-17 16:54:47.296936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.867 [2024-10-17 16:54:47.306399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.867 [2024-10-17 16:54:47.306742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.867 [2024-10-17 16:54:47.306775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.867 [2024-10-17 16:54:47.306794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.867 [2024-10-17 16:54:47.307044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.307286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.307312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.867 [2024-10-17 16:54:47.307328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.867 [2024-10-17 16:54:47.310879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.867 [2024-10-17 16:54:47.320317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.867 [2024-10-17 16:54:47.320722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.867 [2024-10-17 16:54:47.320756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.867 [2024-10-17 16:54:47.320776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.867 [2024-10-17 16:54:47.321026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.321275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.321300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.867 [2024-10-17 16:54:47.321316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.867 [2024-10-17 16:54:47.324875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.867 [2024-10-17 16:54:47.334315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.867 [2024-10-17 16:54:47.334704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.867 [2024-10-17 16:54:47.334736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.867 [2024-10-17 16:54:47.334755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.867 [2024-10-17 16:54:47.334993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.335245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.335271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.867 [2024-10-17 16:54:47.335287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.867 [2024-10-17 16:54:47.338835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.867 [2024-10-17 16:54:47.348270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.867 [2024-10-17 16:54:47.348658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.867 [2024-10-17 16:54:47.348690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.867 [2024-10-17 16:54:47.348708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.867 [2024-10-17 16:54:47.348945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.349199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.349225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.867 [2024-10-17 16:54:47.349242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.867 [2024-10-17 16:54:47.352788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.867 [2024-10-17 16:54:47.362230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.867 [2024-10-17 16:54:47.362622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.867 [2024-10-17 16:54:47.362655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.867 [2024-10-17 16:54:47.362679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.867 [2024-10-17 16:54:47.362918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.363171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.363197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.867 [2024-10-17 16:54:47.363214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.867 [2024-10-17 16:54:47.366802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.867 [2024-10-17 16:54:47.376236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.867 [2024-10-17 16:54:47.376604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.867 [2024-10-17 16:54:47.376636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.867 [2024-10-17 16:54:47.376655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.867 [2024-10-17 16:54:47.376892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.867 [2024-10-17 16:54:47.377146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.867 [2024-10-17 16:54:47.377173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.377188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.380734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.390185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.390574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.390607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.390626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.390864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.391118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.391145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.391161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.394709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.404139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.404553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.404585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.404604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.404841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.405106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.405133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.405149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.408697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.418133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.418497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.418529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.418548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.418785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.419037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.419064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.419080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.422624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.432134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.432532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.432565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.432584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.432823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.433084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.433112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.433129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.436674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.446107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.446497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.446529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.446548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.446785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.447041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.447067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.447082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.450627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.460073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.460464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.460496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.460514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.460752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.460993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.461042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.461058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.464612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.474049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.474443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.474475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.474494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.474732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.474973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.474999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.475026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.478585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.488021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.488396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.488428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.488447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.488684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.488937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.488964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.488980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.492534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.501963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.502369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.502402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.868 [2024-10-17 16:54:47.502426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.868 [2024-10-17 16:54:47.502665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.868 [2024-10-17 16:54:47.502906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.868 [2024-10-17 16:54:47.502931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.868 [2024-10-17 16:54:47.502947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.868 [2024-10-17 16:54:47.506503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.868 [2024-10-17 16:54:47.515929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.868 [2024-10-17 16:54:47.516307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.868 [2024-10-17 16:54:47.516340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.869 [2024-10-17 16:54:47.516358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.869 [2024-10-17 16:54:47.516596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.869 [2024-10-17 16:54:47.516837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.869 [2024-10-17 16:54:47.516863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.869 [2024-10-17 16:54:47.516879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.869 [2024-10-17 16:54:47.520434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.869 [2024-10-17 16:54:47.529859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.869 [2024-10-17 16:54:47.530235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.869 [2024-10-17 16:54:47.530267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.869 [2024-10-17 16:54:47.530286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.869 [2024-10-17 16:54:47.530523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.869 [2024-10-17 16:54:47.530764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.869 [2024-10-17 16:54:47.530789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.869 [2024-10-17 16:54:47.530805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.869 [2024-10-17 16:54:47.534366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.869 [2024-10-17 16:54:47.543791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.869 [2024-10-17 16:54:47.544193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.869 [2024-10-17 16:54:47.544225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:33.869 [2024-10-17 16:54:47.544244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:33.869 [2024-10-17 16:54:47.544480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:33.869 [2024-10-17 16:54:47.544721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.869 [2024-10-17 16:54:47.544752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.869 [2024-10-17 16:54:47.544769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.869 [2024-10-17 16:54:47.548329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.129 [2024-10-17 16:54:47.557778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.129 [2024-10-17 16:54:47.558155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.129 [2024-10-17 16:54:47.558188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.129 [2024-10-17 16:54:47.558207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.129 [2024-10-17 16:54:47.558444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.129 [2024-10-17 16:54:47.558685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.129 [2024-10-17 16:54:47.558710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.129 [2024-10-17 16:54:47.558727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.129 [2024-10-17 16:54:47.562284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.129 [2024-10-17 16:54:47.571708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.129 [2024-10-17 16:54:47.572105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.129 [2024-10-17 16:54:47.572138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.129 [2024-10-17 16:54:47.572156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.129 [2024-10-17 16:54:47.572394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.129 [2024-10-17 16:54:47.572635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.129 [2024-10-17 16:54:47.572660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.129 [2024-10-17 16:54:47.572676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.129 [2024-10-17 16:54:47.576232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.129 [2024-10-17 16:54:47.585656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.129 [2024-10-17 16:54:47.586053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.129 [2024-10-17 16:54:47.586086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.129 [2024-10-17 16:54:47.586104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.129 [2024-10-17 16:54:47.586342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.129 [2024-10-17 16:54:47.586583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.129 [2024-10-17 16:54:47.586608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.129 [2024-10-17 16:54:47.586624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.129 [2024-10-17 16:54:47.590196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.599620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.599994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.600033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.600052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.600289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.600530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.600555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.600573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.604132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.613573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.613974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.614013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.614035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.614272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.614513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.614539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.614554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.618110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.627534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.627921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.627954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.627973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.628223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.628465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.628490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.628506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.632057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.641488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.641890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.641923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.641942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.642199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.642441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.642466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.642483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.646034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.655454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.655844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.655877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.655896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.656146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.656396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.656422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.656439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.659986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.669416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.669805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.669837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.669855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.670103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.670344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.670369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.670386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.673934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.683366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.130 [2024-10-17 16:54:47.683761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.130 [2024-10-17 16:54:47.683792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.130 [2024-10-17 16:54:47.683811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.130 [2024-10-17 16:54:47.684060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.130 [2024-10-17 16:54:47.684302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.130 [2024-10-17 16:54:47.684327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.130 [2024-10-17 16:54:47.684349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.130 [2024-10-17 16:54:47.687898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.130 [2024-10-17 16:54:47.697346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.130 [2024-10-17 16:54:47.697716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.130 [2024-10-17 16:54:47.697747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.130 [2024-10-17 16:54:47.697768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.130 [2024-10-17 16:54:47.698019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.130 [2024-10-17 16:54:47.698260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.130 [2024-10-17 16:54:47.698283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.130 [2024-10-17 16:54:47.698298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.130 [2024-10-17 16:54:47.701843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.130 [2024-10-17 16:54:47.711266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.130 [2024-10-17 16:54:47.711625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.130 [2024-10-17 16:54:47.711657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.130 [2024-10-17 16:54:47.711676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.130 [2024-10-17 16:54:47.711913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.130 [2024-10-17 16:54:47.712166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.130 [2024-10-17 16:54:47.712192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.130 [2024-10-17 16:54:47.712208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.130 [2024-10-17 16:54:47.715754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.130 [2024-10-17 16:54:47.725200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.130 [2024-10-17 16:54:47.725579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.130 [2024-10-17 16:54:47.725610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.130 [2024-10-17 16:54:47.725629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.130 [2024-10-17 16:54:47.725866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.130 [2024-10-17 16:54:47.726118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.130 [2024-10-17 16:54:47.726145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.130 [2024-10-17 16:54:47.726162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.130 [2024-10-17 16:54:47.729710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.130 [2024-10-17 16:54:47.739144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.130 [2024-10-17 16:54:47.739531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.130 [2024-10-17 16:54:47.739568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.130 [2024-10-17 16:54:47.739587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.130 [2024-10-17 16:54:47.739824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.130 [2024-10-17 16:54:47.740077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.130 [2024-10-17 16:54:47.740103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.130 [2024-10-17 16:54:47.740119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.131 [2024-10-17 16:54:47.743665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.131 [2024-10-17 16:54:47.753094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.131 [2024-10-17 16:54:47.753458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.131 [2024-10-17 16:54:47.753490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.131 [2024-10-17 16:54:47.753509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.131 [2024-10-17 16:54:47.753746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.131 [2024-10-17 16:54:47.753998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.131 [2024-10-17 16:54:47.754036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.131 [2024-10-17 16:54:47.754053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.131 [2024-10-17 16:54:47.757601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.131 [2024-10-17 16:54:47.767023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.131 [2024-10-17 16:54:47.767417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.131 [2024-10-17 16:54:47.767449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.131 [2024-10-17 16:54:47.767467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.131 [2024-10-17 16:54:47.767705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.131 [2024-10-17 16:54:47.767946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.131 [2024-10-17 16:54:47.767971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.131 [2024-10-17 16:54:47.767988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.131 [2024-10-17 16:54:47.771543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.131 [2024-10-17 16:54:47.780964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.131 [2024-10-17 16:54:47.781338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.131 [2024-10-17 16:54:47.781371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.131 [2024-10-17 16:54:47.781389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.131 [2024-10-17 16:54:47.781626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.131 [2024-10-17 16:54:47.781873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.131 [2024-10-17 16:54:47.781899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.131 [2024-10-17 16:54:47.781915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.131 [2024-10-17 16:54:47.785471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.131 [2024-10-17 16:54:47.794915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.131 [2024-10-17 16:54:47.795289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.131 [2024-10-17 16:54:47.795323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.131 [2024-10-17 16:54:47.795342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.131 [2024-10-17 16:54:47.795580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.131 [2024-10-17 16:54:47.795820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.131 [2024-10-17 16:54:47.795845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.131 [2024-10-17 16:54:47.795861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.131 [2024-10-17 16:54:47.799416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.131 [2024-10-17 16:54:47.808852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.131 [2024-10-17 16:54:47.809225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.131 [2024-10-17 16:54:47.809257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.131 [2024-10-17 16:54:47.809276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.131 [2024-10-17 16:54:47.809513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.131 [2024-10-17 16:54:47.809755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.131 [2024-10-17 16:54:47.809778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.131 [2024-10-17 16:54:47.809794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.131 [2024-10-17 16:54:47.813348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.822787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.823187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.823218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.823237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.823475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.823716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.823739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.823755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.827179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.836390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.836722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.836749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.836765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.837029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.837259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.837295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.837323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.840562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.849902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.850340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.850387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.850404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.850659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.850857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.850877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.850889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.853946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.863187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.863583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.863612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.863628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.863871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.864132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.864154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.864167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.867209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.876503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.876942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.876971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.876992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.877246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.877461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.877480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.877492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.880454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.889685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.890117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.890146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.890162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.890415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.890614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.890633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.890645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.893581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.902959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.903317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.903361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.903378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.903598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.903812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.903831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.903843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.906810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.916235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.916648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.916691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.916706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.916959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.917185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.393 [2024-10-17 16:54:47.917212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.393 [2024-10-17 16:54:47.917225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.393 [2024-10-17 16:54:47.920188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.393 [2024-10-17 16:54:47.929420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.393 [2024-10-17 16:54:47.929734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.393 [2024-10-17 16:54:47.929776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.393 [2024-10-17 16:54:47.929792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.393 [2024-10-17 16:54:47.930022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.393 [2024-10-17 16:54:47.930269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:47.930290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:47.930305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:47.933414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:47.942721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:47.943112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:47.943141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:47.943157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:47.943386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:47.943600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:47.943619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:47.943631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:47.946810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:47.956060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:47.956439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:47.956468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:47.956484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:47.956713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:47.956970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:47.956991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:47.957014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:47.960397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:47.969378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:47.969814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:47.969842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:47.969858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:47.970080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:47.970312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:47.970331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:47.970343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:47.973463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:47.982597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:47.982965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:47.982993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:47.983021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:47.983251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:47.983483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:47.983502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:47.983515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:47.986479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:47.995891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:47.996405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:47.996433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:47.996450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:47.996691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:47.996888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:47.996907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:47.996919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:48.000442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:48.009848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:48.010248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:48.010277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:48.010310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:48.010583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:48.010776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:48.010794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:48.010807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:48.014349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:48.023736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:48.024106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:48.024136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:48.024169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:48.024426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:48.024618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:48.024637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:48.024649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:48.028135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:48.037742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:48.038113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:48.038145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:48.038163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:48.038399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:48.038641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:48.038664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:48.038679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:48.042234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:48.051652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:48.052022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:48.052054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:48.052072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:48.052308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:48.052550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:48.052573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:48.052595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:48.056155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:48.065587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.394 [2024-10-17 16:54:48.065949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.394 [2024-10-17 16:54:48.065980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:34.394 [2024-10-17 16:54:48.065998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:34.394 [2024-10-17 16:54:48.066248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:34.394 [2024-10-17 16:54:48.066489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.394 [2024-10-17 16:54:48.066521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.394 [2024-10-17 16:54:48.066535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.394 [2024-10-17 16:54:48.070153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.394 [2024-10-17 16:54:48.079609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.394 [2024-10-17 16:54:48.080021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.394 [2024-10-17 16:54:48.080078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.394 [2024-10-17 16:54:48.080096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.395 [2024-10-17 16:54:48.080333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.395 [2024-10-17 16:54:48.080573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.395 [2024-10-17 16:54:48.080596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.395 [2024-10-17 16:54:48.080611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.658 [2024-10-17 16:54:48.084181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.658 [2024-10-17 16:54:48.093440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.658 [2024-10-17 16:54:48.093801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.658 [2024-10-17 16:54:48.093832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.658 [2024-10-17 16:54:48.093850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.658 [2024-10-17 16:54:48.094098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.658 [2024-10-17 16:54:48.094340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.658 [2024-10-17 16:54:48.094364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.658 [2024-10-17 16:54:48.094379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.658 [2024-10-17 16:54:48.097929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.658 [2024-10-17 16:54:48.107373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.658 [2024-10-17 16:54:48.107750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.658 [2024-10-17 16:54:48.107781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.658 [2024-10-17 16:54:48.107799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.658 [2024-10-17 16:54:48.108049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.658 [2024-10-17 16:54:48.108290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.658 [2024-10-17 16:54:48.108313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.658 [2024-10-17 16:54:48.108328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.658 [2024-10-17 16:54:48.111878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.658 [2024-10-17 16:54:48.121313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.658 [2024-10-17 16:54:48.121675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.658 [2024-10-17 16:54:48.121706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.658 [2024-10-17 16:54:48.121724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.658 [2024-10-17 16:54:48.121960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.658 [2024-10-17 16:54:48.122212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.658 [2024-10-17 16:54:48.122237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.658 [2024-10-17 16:54:48.122252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.658 [2024-10-17 16:54:48.125800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.658 [2024-10-17 16:54:48.135258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.658 [2024-10-17 16:54:48.135624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.658 [2024-10-17 16:54:48.135657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.658 [2024-10-17 16:54:48.135675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.658 [2024-10-17 16:54:48.135912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.658 [2024-10-17 16:54:48.136167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.658 [2024-10-17 16:54:48.136191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.658 [2024-10-17 16:54:48.136206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.658 [2024-10-17 16:54:48.139754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.658 [2024-10-17 16:54:48.149189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.658 [2024-10-17 16:54:48.149556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.149587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.149605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.149847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.150103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.150128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.150143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.153691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.163139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.163500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.163531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.163549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.163785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.164039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.164064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.164078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.167626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.177056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.177444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.177474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.177492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.177728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.177970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.177993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.178021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.181570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.191024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.191386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.191418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.191436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.191672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 5411.75 IOPS, 21.14 MiB/s [2024-10-17T14:54:48.349Z] [2024-10-17 16:54:48.193644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.193666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.193688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.197278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.205032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.205390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.205421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.205439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.205675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.205917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.205940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.205954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.209517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.218961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.219367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.219399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.219417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.219653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.219894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.219918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.219933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.223487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.232916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.233336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.233367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.233385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.233622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.233863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.233886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.233901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.237460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.246889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.247261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.247298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.247317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.247553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.247794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.247817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.247832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.251392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.260857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.261231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.261262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.261280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.261517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.261758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.261781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.261796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.265353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.274784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.275159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.275191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.275209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.275445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.275687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.275710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.275725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.279283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.288708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.289084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.289116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.289134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.289370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.289618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.289642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.289656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.293231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.302663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.303034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.303066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.303084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.303321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.303561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.303584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.303600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.307159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.316588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.316973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.317011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.317031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.317268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.317509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.317532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.317547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.321106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.330560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.330922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.330953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.330971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.331219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.331461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.331484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.331499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.659 [2024-10-17 16:54:48.335065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.659 [2024-10-17 16:54:48.344506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.659 [2024-10-17 16:54:48.344896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.659 [2024-10-17 16:54:48.344928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.659 [2024-10-17 16:54:48.344946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.659 [2024-10-17 16:54:48.345195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.659 [2024-10-17 16:54:48.345437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.659 [2024-10-17 16:54:48.345461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.659 [2024-10-17 16:54:48.345476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.921 [2024-10-17 16:54:48.349036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.921 [2024-10-17 16:54:48.358479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.921 [2024-10-17 16:54:48.358868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.921 [2024-10-17 16:54:48.358899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:34.921 [2024-10-17 16:54:48.358917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:34.921 [2024-10-17 16:54:48.359166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:34.921 [2024-10-17 16:54:48.359408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.921 [2024-10-17 16:54:48.359432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.921 [2024-10-17 16:54:48.359447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.921 [2024-10-17 16:54:48.362992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[… the identical reset/reconnect failure cycle repeats 27 more times at ~14 ms intervals, from 16:54:48.372427 through 16:54:48.739791, each against tqpair=0x15b7b00 at 10.0.0.2:4420 with connect() errno = 111, each ending in "_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed." …]
00:26:35.185 [2024-10-17 16:54:48.749225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.185 [2024-10-17 16:54:48.749600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.185 [2024-10-17 16:54:48.749631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.185 [2024-10-17 16:54:48.749649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.185 [2024-10-17 16:54:48.749886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.185 [2024-10-17 16:54:48.750140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.185 [2024-10-17 16:54:48.750164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.185 [2024-10-17 16:54:48.750180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.185 [2024-10-17 16:54:48.753728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.185 [2024-10-17 16:54:48.763178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.185 [2024-10-17 16:54:48.763564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.185 [2024-10-17 16:54:48.763600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.185 [2024-10-17 16:54:48.763619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.185 [2024-10-17 16:54:48.763855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.185 [2024-10-17 16:54:48.764110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.185 [2024-10-17 16:54:48.764134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.185 [2024-10-17 16:54:48.764149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.185 [2024-10-17 16:54:48.767697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.185 [2024-10-17 16:54:48.777130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.185 [2024-10-17 16:54:48.777499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.185 [2024-10-17 16:54:48.777530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.185 [2024-10-17 16:54:48.777548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.185 [2024-10-17 16:54:48.777785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.185 [2024-10-17 16:54:48.778038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.185 [2024-10-17 16:54:48.778062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.185 [2024-10-17 16:54:48.778077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.185 [2024-10-17 16:54:48.781622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.185 [2024-10-17 16:54:48.791054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.185 [2024-10-17 16:54:48.791440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.185 [2024-10-17 16:54:48.791471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.185 [2024-10-17 16:54:48.791489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.186 [2024-10-17 16:54:48.791725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.186 [2024-10-17 16:54:48.791966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.186 [2024-10-17 16:54:48.791990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.186 [2024-10-17 16:54:48.792016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.186 [2024-10-17 16:54:48.795581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.186 [2024-10-17 16:54:48.805017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.186 [2024-10-17 16:54:48.805402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.186 [2024-10-17 16:54:48.805433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.186 [2024-10-17 16:54:48.805451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.186 [2024-10-17 16:54:48.805688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.186 [2024-10-17 16:54:48.805938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.186 [2024-10-17 16:54:48.805962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.186 [2024-10-17 16:54:48.805977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.186 [2024-10-17 16:54:48.809534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.186 [2024-10-17 16:54:48.818963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.186 [2024-10-17 16:54:48.819403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.186 [2024-10-17 16:54:48.819457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.186 [2024-10-17 16:54:48.819475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.186 [2024-10-17 16:54:48.819711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.186 [2024-10-17 16:54:48.819951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.186 [2024-10-17 16:54:48.819974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.186 [2024-10-17 16:54:48.819989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.186 [2024-10-17 16:54:48.823549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.186 [2024-10-17 16:54:48.832766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.186 [2024-10-17 16:54:48.833142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.186 [2024-10-17 16:54:48.833173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.186 [2024-10-17 16:54:48.833191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.186 [2024-10-17 16:54:48.833428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.186 [2024-10-17 16:54:48.833668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.186 [2024-10-17 16:54:48.833691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.186 [2024-10-17 16:54:48.833706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.186 [2024-10-17 16:54:48.837266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.186 [2024-10-17 16:54:48.846687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.186 [2024-10-17 16:54:48.847094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.186 [2024-10-17 16:54:48.847125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.186 [2024-10-17 16:54:48.847143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.186 [2024-10-17 16:54:48.847379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.186 [2024-10-17 16:54:48.847620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.186 [2024-10-17 16:54:48.847644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.186 [2024-10-17 16:54:48.847658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.186 [2024-10-17 16:54:48.851225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.186 [2024-10-17 16:54:48.860658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.186 [2024-10-17 16:54:48.861043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.186 [2024-10-17 16:54:48.861075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.186 [2024-10-17 16:54:48.861093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.186 [2024-10-17 16:54:48.861329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.186 [2024-10-17 16:54:48.861571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.186 [2024-10-17 16:54:48.861595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.186 [2024-10-17 16:54:48.861609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.186 [2024-10-17 16:54:48.865168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.446 [2024-10-17 16:54:48.874602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.446 [2024-10-17 16:54:48.875056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.446 [2024-10-17 16:54:48.875088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.446 [2024-10-17 16:54:48.875106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.446 [2024-10-17 16:54:48.875343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.446 [2024-10-17 16:54:48.875584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.446 [2024-10-17 16:54:48.875608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.446 [2024-10-17 16:54:48.875623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.446 [2024-10-17 16:54:48.879189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.446 [2024-10-17 16:54:48.888622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.446 [2024-10-17 16:54:48.888994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.446 [2024-10-17 16:54:48.889032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.446 [2024-10-17 16:54:48.889050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.446 [2024-10-17 16:54:48.889287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.889528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.889551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.889567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.893125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.902566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.902929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.902960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.902985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.903232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.903473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.903497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.903511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.907066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.916488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.916869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.916900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.916917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.917166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.917408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.917431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.917446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.921021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.930450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.930814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.930847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.930865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.931115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.931357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.931380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.931395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.934946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.944382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.944767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.944797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.944815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.945063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.945305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.945335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.945351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.948898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.958360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.958720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.958751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.958768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.959014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.959257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.959280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.959295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.962841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.972327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.972699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.972730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.972750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.972988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.973241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.973264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.973279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.976829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:48.986284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:48.986680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:48.986711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:48.986729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:48.986966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:48.987217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:48.987242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:48.987257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:48.990831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:49.000312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:49.000705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:49.000736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:49.000754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:49.000991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:49.001246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:49.001279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:49.001293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:49.004844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:49.014301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.447 [2024-10-17 16:54:49.014671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.447 [2024-10-17 16:54:49.014702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.447 [2024-10-17 16:54:49.014721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.447 [2024-10-17 16:54:49.014959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.447 [2024-10-17 16:54:49.015212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.447 [2024-10-17 16:54:49.015237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.447 [2024-10-17 16:54:49.015253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.447 [2024-10-17 16:54:49.018810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.447 [2024-10-17 16:54:49.028266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.447 [2024-10-17 16:54:49.028652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.447 [2024-10-17 16:54:49.028683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.447 [2024-10-17 16:54:49.028701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.447 [2024-10-17 16:54:49.028937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.447 [2024-10-17 16:54:49.029190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.447 [2024-10-17 16:54:49.029215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.447 [2024-10-17 16:54:49.029230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.447 [2024-10-17 16:54:49.032784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.447 [2024-10-17 16:54:49.042249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.447 [2024-10-17 16:54:49.042610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.042642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.042660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.042904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.043163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.043189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.043204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.046752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.448 [2024-10-17 16:54:49.056205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.448 [2024-10-17 16:54:49.056599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.056630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.056648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.056885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.057138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.057162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.057177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.060728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.448 [2024-10-17 16:54:49.070190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.448 [2024-10-17 16:54:49.070572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.070604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.070622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.070858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.071112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.071137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.071152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.074699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.448 [2024-10-17 16:54:49.084151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.448 [2024-10-17 16:54:49.084527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.084558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.084575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.084811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.085064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.085088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.085109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.088659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.448 [2024-10-17 16:54:49.098133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.448 [2024-10-17 16:54:49.098495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.098527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.098545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.098783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.099034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.099057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.099071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.102859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.448 [2024-10-17 16:54:49.112104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.448 [2024-10-17 16:54:49.112477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.112510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.112528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.112766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.113018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.113042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.113057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.116605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.448 [2024-10-17 16:54:49.126057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.448 [2024-10-17 16:54:49.126424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.448 [2024-10-17 16:54:49.126456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.448 [2024-10-17 16:54:49.126474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.448 [2024-10-17 16:54:49.126711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.448 [2024-10-17 16:54:49.126952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.448 [2024-10-17 16:54:49.126975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.448 [2024-10-17 16:54:49.126990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.448 [2024-10-17 16:54:49.130592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.707 [2024-10-17 16:54:49.140061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.707 [2024-10-17 16:54:49.140439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.707 [2024-10-17 16:54:49.140472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.707 [2024-10-17 16:54:49.140490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.707 [2024-10-17 16:54:49.140727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.707 [2024-10-17 16:54:49.140968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.707 [2024-10-17 16:54:49.140992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.707 [2024-10-17 16:54:49.141021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.707 [2024-10-17 16:54:49.144572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.707 [2024-10-17 16:54:49.154032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.707 [2024-10-17 16:54:49.154404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.707 [2024-10-17 16:54:49.154436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.707 [2024-10-17 16:54:49.154455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.707 [2024-10-17 16:54:49.154707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.707 [2024-10-17 16:54:49.154950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.707 [2024-10-17 16:54:49.154973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.707 [2024-10-17 16:54:49.154989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.707 [2024-10-17 16:54:49.158547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.707 [2024-10-17 16:54:49.167981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.707 [2024-10-17 16:54:49.168392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.707 [2024-10-17 16:54:49.168424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.707 [2024-10-17 16:54:49.168442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.707 [2024-10-17 16:54:49.168678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.707 [2024-10-17 16:54:49.168919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.707 [2024-10-17 16:54:49.168942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.707 [2024-10-17 16:54:49.168958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.172516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.181949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.182293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.182321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.182338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.182573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.182793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.182813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.182826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.186197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 4329.40 IOPS, 16.91 MiB/s [2024-10-17T14:54:49.398Z] [2024-10-17 16:54:49.196849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.197243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.197272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.197289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.197539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.197737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.197756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.197768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.201109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.210675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.211126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.211172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.211189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.211464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.211656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.211675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.211687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.215195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.224678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.225073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.225104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.225122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.225359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.225600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.225623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.225645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.229208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.238658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.239027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.239059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.239077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.239314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.239555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.239578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.239592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.243143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.252570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.252958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.252988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.253018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.253258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.253498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.253521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.253536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.257092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.266513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.266877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.266908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.266926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.267174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.267415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.267439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.267454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.270999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.280436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.280779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.280815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.280834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.281083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.281325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.281348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.281364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.284909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.294343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.294707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.294737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.294755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.294991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.295243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.295267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.295282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.298840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.308271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.308656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.308687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.308705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.308943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.309195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.708 [2024-10-17 16:54:49.309219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.708 [2024-10-17 16:54:49.309233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.708 [2024-10-17 16:54:49.312779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.708 [2024-10-17 16:54:49.322206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.708 [2024-10-17 16:54:49.322574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.708 [2024-10-17 16:54:49.322605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.708 [2024-10-17 16:54:49.322623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.708 [2024-10-17 16:54:49.322859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.708 [2024-10-17 16:54:49.323117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.709 [2024-10-17 16:54:49.323142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.709 [2024-10-17 16:54:49.323157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.709 [2024-10-17 16:54:49.326701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.709 [2024-10-17 16:54:49.336140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.709 [2024-10-17 16:54:49.336539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.709 [2024-10-17 16:54:49.336570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.709 [2024-10-17 16:54:49.336588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.709 [2024-10-17 16:54:49.336824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.709 [2024-10-17 16:54:49.337077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.709 [2024-10-17 16:54:49.337102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.709 [2024-10-17 16:54:49.337116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.709 [2024-10-17 16:54:49.340660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.709 [2024-10-17 16:54:49.350114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.709 [2024-10-17 16:54:49.350503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.709 [2024-10-17 16:54:49.350534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.709 [2024-10-17 16:54:49.350552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.709 [2024-10-17 16:54:49.350788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.709 [2024-10-17 16:54:49.351041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.709 [2024-10-17 16:54:49.351065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.709 [2024-10-17 16:54:49.351081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.709 [2024-10-17 16:54:49.354625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.709 [2024-10-17 16:54:49.364061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.709 [2024-10-17 16:54:49.364464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.709 [2024-10-17 16:54:49.364495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.709 [2024-10-17 16:54:49.364513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.709 [2024-10-17 16:54:49.364750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.709 [2024-10-17 16:54:49.364991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.709 [2024-10-17 16:54:49.365025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.709 [2024-10-17 16:54:49.365041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.709 [2024-10-17 16:54:49.368592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.709 [2024-10-17 16:54:49.378018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.709 [2024-10-17 16:54:49.378391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.709 [2024-10-17 16:54:49.378422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.709 [2024-10-17 16:54:49.378439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.709 [2024-10-17 16:54:49.378676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.709 [2024-10-17 16:54:49.378917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.709 [2024-10-17 16:54:49.378941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.709 [2024-10-17 16:54:49.378956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.709 [2024-10-17 16:54:49.382512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.709 [2024-10-17 16:54:49.391942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.709 [2024-10-17 16:54:49.392306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.709 [2024-10-17 16:54:49.392337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.709 [2024-10-17 16:54:49.392355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.709 [2024-10-17 16:54:49.392591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.709 [2024-10-17 16:54:49.392833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.709 [2024-10-17 16:54:49.392857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.709 [2024-10-17 16:54:49.392871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.709 [2024-10-17 16:54:49.396447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.968 [2024-10-17 16:54:49.405882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.968 [2024-10-17 16:54:49.406254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.968 [2024-10-17 16:54:49.406286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420
00:26:35.968 [2024-10-17 16:54:49.406304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set
00:26:35.968 [2024-10-17 16:54:49.406541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor
00:26:35.968 [2024-10-17 16:54:49.406781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.968 [2024-10-17 16:54:49.406805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.968 [2024-10-17 16:54:49.406820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.968 [2024-10-17 16:54:49.410374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.968 [2024-10-17 16:54:49.419873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.968 [2024-10-17 16:54:49.420253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.968 [2024-10-17 16:54:49.420285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.968 [2024-10-17 16:54:49.420309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.968 [2024-10-17 16:54:49.420546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.968 [2024-10-17 16:54:49.420787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.968 [2024-10-17 16:54:49.420810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.968 [2024-10-17 16:54:49.420825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.968 [2024-10-17 16:54:49.424382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.968 [2024-10-17 16:54:49.433811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.968 [2024-10-17 16:54:49.434201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.968 [2024-10-17 16:54:49.434232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.968 [2024-10-17 16:54:49.434250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.968 [2024-10-17 16:54:49.434487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.968 [2024-10-17 16:54:49.434728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.968 [2024-10-17 16:54:49.434751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.968 [2024-10-17 16:54:49.434766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.968 [2024-10-17 16:54:49.438320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.447743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.448133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.448164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.448182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.448419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.448659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.448682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.448697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.452251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.461673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.462033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.462065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.462083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.462320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.462561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.462591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.462607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.466161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.475587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.475951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.475982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.476008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.476256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.476497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.476520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.476535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.480089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.489510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.489897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.489927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.489945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.490191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.490433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.490456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.490471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.494025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.503466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.503826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.503857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.503875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.504124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.504365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.504389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.504403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.507952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.517395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.517778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.517809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.517827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.518075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.518317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.518341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.518355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.521901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.531336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.531722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.531754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.531772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.532020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.532261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.532285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.532300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.535851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.545284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.545647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.545678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.545695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.545932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.546184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.546207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.546223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.549768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.559216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.559603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.559635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.559653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.559895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.560147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.560171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.560186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.563728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.573155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.573539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.573570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.573588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.573824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.574076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.574101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.969 [2024-10-17 16:54:49.574116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.969 [2024-10-17 16:54:49.577658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.969 [2024-10-17 16:54:49.587084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.969 [2024-10-17 16:54:49.587501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.969 [2024-10-17 16:54:49.587532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.969 [2024-10-17 16:54:49.587550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.969 [2024-10-17 16:54:49.587786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.969 [2024-10-17 16:54:49.588041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.969 [2024-10-17 16:54:49.588066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.970 [2024-10-17 16:54:49.588080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.970 [2024-10-17 16:54:49.591625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.970 [2024-10-17 16:54:49.601075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.970 [2024-10-17 16:54:49.601460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.970 [2024-10-17 16:54:49.601491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.970 [2024-10-17 16:54:49.601509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.970 [2024-10-17 16:54:49.601745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.970 [2024-10-17 16:54:49.601986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.970 [2024-10-17 16:54:49.602021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.970 [2024-10-17 16:54:49.602043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.970 [2024-10-17 16:54:49.605588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.970 [2024-10-17 16:54:49.615018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.970 [2024-10-17 16:54:49.615398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.970 [2024-10-17 16:54:49.615429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.970 [2024-10-17 16:54:49.615447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.970 [2024-10-17 16:54:49.615683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.970 [2024-10-17 16:54:49.615923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.970 [2024-10-17 16:54:49.615947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.970 [2024-10-17 16:54:49.615962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.970 [2024-10-17 16:54:49.619516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.970 [2024-10-17 16:54:49.628940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.970 [2024-10-17 16:54:49.629310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.970 [2024-10-17 16:54:49.629341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.970 [2024-10-17 16:54:49.629359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.970 [2024-10-17 16:54:49.629596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.970 [2024-10-17 16:54:49.629837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.970 [2024-10-17 16:54:49.629860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.970 [2024-10-17 16:54:49.629875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.970 [2024-10-17 16:54:49.633435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.970 [2024-10-17 16:54:49.642858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.970 [2024-10-17 16:54:49.643249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.970 [2024-10-17 16:54:49.643281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.970 [2024-10-17 16:54:49.643299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.970 [2024-10-17 16:54:49.643535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.970 [2024-10-17 16:54:49.643776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.970 [2024-10-17 16:54:49.643799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.970 [2024-10-17 16:54:49.643814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.970 [2024-10-17 16:54:49.647369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.970 [2024-10-17 16:54:49.656795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.970 [2024-10-17 16:54:49.657201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.970 [2024-10-17 16:54:49.657233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:35.970 [2024-10-17 16:54:49.657251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:35.970 [2024-10-17 16:54:49.657488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:35.970 [2024-10-17 16:54:49.657730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.970 [2024-10-17 16:54:49.657753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.970 [2024-10-17 16:54:49.657768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.229 [2024-10-17 16:54:49.661324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.229 [2024-10-17 16:54:49.670748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.229 [2024-10-17 16:54:49.671118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.229 [2024-10-17 16:54:49.671149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.229 [2024-10-17 16:54:49.671167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.229 [2024-10-17 16:54:49.671404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.229 [2024-10-17 16:54:49.671644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.229 [2024-10-17 16:54:49.671668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.229 [2024-10-17 16:54:49.671683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.229 [2024-10-17 16:54:49.675237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.229 [2024-10-17 16:54:49.684660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.229 [2024-10-17 16:54:49.685034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.229 [2024-10-17 16:54:49.685065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.229 [2024-10-17 16:54:49.685083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.229 [2024-10-17 16:54:49.685320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.229 [2024-10-17 16:54:49.685560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.229 [2024-10-17 16:54:49.685584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.229 [2024-10-17 16:54:49.685598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.229 [2024-10-17 16:54:49.689154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2468758 Killed "${NVMF_APP[@]}" "$@" 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2469723 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2469723 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2469723 ']' 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.229 16:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.229 [2024-10-17 16:54:49.698602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.229 [2024-10-17 16:54:49.698988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.229 [2024-10-17 16:54:49.699026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.229 [2024-10-17 16:54:49.699045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.229 [2024-10-17 16:54:49.699282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.229 [2024-10-17 16:54:49.699524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.229 [2024-10-17 16:54:49.699548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.229 [2024-10-17 16:54:49.699564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.229 [2024-10-17 16:54:49.703115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.229 [2024-10-17 16:54:49.712543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.229 [2024-10-17 16:54:49.712912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.229 [2024-10-17 16:54:49.712944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.229 [2024-10-17 16:54:49.712962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.713208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.713450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.713474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.713489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.717038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.726469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.726829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.726861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.726879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.727140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.727383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.727406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.727422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.730966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.740424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.740840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.740871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.740900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.741147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.741389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.741413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.741429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.744970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.230 [2024-10-17 16:54:49.748758] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:26:36.230 [2024-10-17 16:54:49.748828] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.230 [2024-10-17 16:54:49.754350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.754759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.754791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.754820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.755080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.755324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.755348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.755364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.758906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.768336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.768735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.768767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.768786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.769050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.769292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.769315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.769331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.772887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.782321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.782692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.782724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.782742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.782978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.783229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.783254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.783269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.786384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.796237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.796694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.796723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.796746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.796995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.797247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.797269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.797299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.800828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.810050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.810397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.810439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.810456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.810691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.810941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.810965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.810997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.814485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.821572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.230 [2024-10-17 16:54:49.823901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.824280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.824310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.824341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.824574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.824816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.824840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.824856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.828330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.837826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.838424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.838486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.838507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.838767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.839025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.839050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.839083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.842544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.230 [2024-10-17 16:54:49.851763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.230 [2024-10-17 16:54:49.852221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.230 [2024-10-17 16:54:49.852250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.230 [2024-10-17 16:54:49.852277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.230 [2024-10-17 16:54:49.852533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.230 [2024-10-17 16:54:49.852776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.230 [2024-10-17 16:54:49.852800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.230 [2024-10-17 16:54:49.852816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.230 [2024-10-17 16:54:49.856285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.231 [2024-10-17 16:54:49.865524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.231 [2024-10-17 16:54:49.865957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.231 [2024-10-17 16:54:49.865986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.231 [2024-10-17 16:54:49.866020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.231 [2024-10-17 16:54:49.866279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.231 [2024-10-17 16:54:49.866522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.231 [2024-10-17 16:54:49.866547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.231 [2024-10-17 16:54:49.866562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.231 [2024-10-17 16:54:49.870072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.231 [2024-10-17 16:54:49.879287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.231 [2024-10-17 16:54:49.879677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.231 [2024-10-17 16:54:49.879714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.231 [2024-10-17 16:54:49.879731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.231 [2024-10-17 16:54:49.879962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.231 [2024-10-17 16:54:49.880205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.231 [2024-10-17 16:54:49.880227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.231 [2024-10-17 16:54:49.880241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.231 [2024-10-17 16:54:49.883653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.231 [2024-10-17 16:54:49.884276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.231 [2024-10-17 16:54:49.884307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.231 [2024-10-17 16:54:49.884335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.231 [2024-10-17 16:54:49.884345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:36.231 [2024-10-17 16:54:49.884354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.231 [2024-10-17 16:54:49.885881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.231 [2024-10-17 16:54:49.885941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.231 [2024-10-17 16:54:49.885937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.231 [2024-10-17 16:54:49.893176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.231 [2024-10-17 16:54:49.893676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.231 [2024-10-17 16:54:49.893726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.231 [2024-10-17 16:54:49.893747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.231 [2024-10-17 16:54:49.893992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.231 [2024-10-17 16:54:49.894249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.231 [2024-10-17 16:54:49.894274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.231 [2024-10-17 16:54:49.894303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.231 [2024-10-17 16:54:49.897897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.231 [2024-10-17 16:54:49.907147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.231 [2024-10-17 16:54:49.907692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.231 [2024-10-17 16:54:49.907745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.231 [2024-10-17 16:54:49.907768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.231 [2024-10-17 16:54:49.908029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.231 [2024-10-17 16:54:49.908277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.231 [2024-10-17 16:54:49.908312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.231 [2024-10-17 16:54:49.908330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.231 [2024-10-17 16:54:49.911884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.490 [2024-10-17 16:54:49.920832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.490 [2024-10-17 16:54:49.921329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.490 [2024-10-17 16:54:49.921379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.490 [2024-10-17 16:54:49.921400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.490 [2024-10-17 16:54:49.921652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.490 [2024-10-17 16:54:49.921862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.490 [2024-10-17 16:54:49.921883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.490 [2024-10-17 16:54:49.921899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.490 [2024-10-17 16:54:49.925267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.490 [2024-10-17 16:54:49.934385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.490 [2024-10-17 16:54:49.934877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.490 [2024-10-17 16:54:49.934926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.490 [2024-10-17 16:54:49.934947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.490 [2024-10-17 16:54:49.935193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.490 [2024-10-17 16:54:49.935421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.490 [2024-10-17 16:54:49.935443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.490 [2024-10-17 16:54:49.935460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.490 [2024-10-17 16:54:49.938579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.490 [2024-10-17 16:54:49.947817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.490 [2024-10-17 16:54:49.948346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.490 [2024-10-17 16:54:49.948394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.490 [2024-10-17 16:54:49.948415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.490 [2024-10-17 16:54:49.948666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.490 [2024-10-17 16:54:49.948875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.490 [2024-10-17 16:54:49.948896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.490 [2024-10-17 16:54:49.948912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.490 [2024-10-17 16:54:49.952109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.490 [2024-10-17 16:54:49.961337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.490 [2024-10-17 16:54:49.961817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.490 [2024-10-17 16:54:49.961857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.490 [2024-10-17 16:54:49.961880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.490 [2024-10-17 16:54:49.962130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.490 [2024-10-17 16:54:49.962364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.490 [2024-10-17 16:54:49.962386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.490 [2024-10-17 16:54:49.962404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.490 [2024-10-17 16:54:49.965522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.490 [2024-10-17 16:54:49.974721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.490 [2024-10-17 16:54:49.975094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.490 [2024-10-17 16:54:49.975125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.490 [2024-10-17 16:54:49.975143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.490 [2024-10-17 16:54:49.975373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.490 [2024-10-17 16:54:49.975594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.490 [2024-10-17 16:54:49.975616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.490 [2024-10-17 16:54:49.975630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.490 [2024-10-17 16:54:49.978979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.490 [2024-10-17 16:54:49.988248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.490 [2024-10-17 16:54:49.988604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.490 [2024-10-17 16:54:49.988635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.490 [2024-10-17 16:54:49.988652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.490 [2024-10-17 16:54:49.988882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.490 [2024-10-17 16:54:49.989130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.490 [2024-10-17 16:54:49.989154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.490 [2024-10-17 16:54:49.989170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:49.992433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.491 [2024-10-17 16:54:50.001851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-10-17 16:54:50.002177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.002207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.002225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.002439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.002658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.002683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.002698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.005894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 [2024-10-17 16:54:50.015337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.015684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.015715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.015734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.015953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.016250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.016291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.016319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.019593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-10-17 16:54:50.028964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.029347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.029377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.029394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.029609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.029836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.029859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.029873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.030818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.491 [2024-10-17 16:54:50.033133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-10-17 16:54:50.042498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.042827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.042859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.042877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.043102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.043339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.043362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.043378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.046616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 [2024-10-17 16:54:50.056013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.056477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.056514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.056535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.056766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.056975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.057024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.057043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.060310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 [2024-10-17 16:54:50.069435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.069800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.069831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.069848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.070088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.070300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.070336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.070350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.073513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.491 Malloc0 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-10-17 16:54:50.082925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.083268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.083300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.083318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.083549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.083770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.083792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.083806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-10-17 16:54:50.087090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.491 [2024-10-17 16:54:50.096589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.491 [2024-10-17 16:54:50.096941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.491 [2024-10-17 16:54:50.096970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b7b00 with addr=10.0.0.2, port=4420 00:26:36.491 [2024-10-17 16:54:50.096988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b7b00 is same with the state(6) to be set 00:26:36.491 [2024-10-17 16:54:50.097217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7b00 (9): Bad file descriptor 00:26:36.491 [2024-10-17 16:54:50.097233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.491 [2024-10-17 
16:54:50.097446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.491 [2024-10-17 16:54:50.097469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.491 [2024-10-17 16:54:50.097483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.491 [2024-10-17 16:54:50.100866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.491 16:54:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2469043 00:26:36.491 [2024-10-17 16:54:50.110129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.751 [2024-10-17 16:54:50.186411] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:37.687 3618.33 IOPS, 14.13 MiB/s [2024-10-17T14:54:52.315Z] 4340.29 IOPS, 16.95 MiB/s [2024-10-17T14:54:53.250Z] 4863.38 IOPS, 19.00 MiB/s [2024-10-17T14:54:54.630Z] 5280.00 IOPS, 20.62 MiB/s [2024-10-17T14:54:55.564Z] 5611.30 IOPS, 21.92 MiB/s [2024-10-17T14:54:56.502Z] 5886.73 IOPS, 23.00 MiB/s [2024-10-17T14:54:57.514Z] 6115.33 IOPS, 23.89 MiB/s [2024-10-17T14:54:58.455Z] 6299.92 IOPS, 24.61 MiB/s [2024-10-17T14:54:59.389Z] 6464.14 IOPS, 25.25 MiB/s 00:26:45.699 Latency(us) 00:26:45.699 [2024-10-17T14:54:59.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.699 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:45.699 Verification LBA range: start 0x0 length 0x4000 00:26:45.699 Nvme1n1 : 15.01 6602.47 25.79 8721.60 0.00 8326.33 591.64 23010.42 00:26:45.699 [2024-10-17T14:54:59.389Z] =================================================================================================================== 00:26:45.699 [2024-10-17T14:54:59.389Z] Total : 6602.47 25.79 8721.60 0.00 8326.33 591.64 23010.42 00:26:45.957 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:45.957 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.957 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 
-- # sync 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:45.958 rmmod nvme_tcp 00:26:45.958 rmmod nvme_fabrics 00:26:45.958 rmmod nvme_keyring 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2469723 ']' 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2469723 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2469723 ']' 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2469723 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2469723 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2469723' 00:26:45.958 killing process with pid 2469723 00:26:45.958 16:54:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2469723 00:26:45.958 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2469723 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.216 16:54:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.755 00:26:48.755 real 0m22.528s 00:26:48.755 user 1m0.641s 00:26:48.755 sys 0m3.997s 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:48.755 ************************************ 00:26:48.755 END TEST nvmf_bdevperf 00:26:48.755 
************************************ 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.755 ************************************ 00:26:48.755 START TEST nvmf_target_disconnect 00:26:48.755 ************************************ 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:48.755 * Looking for test storage... 00:26:48.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:26:48.755 16:55:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.755 --rc genhtml_branch_coverage=1 00:26:48.755 --rc genhtml_function_coverage=1 00:26:48.755 --rc genhtml_legend=1 00:26:48.755 --rc geninfo_all_blocks=1 00:26:48.755 --rc geninfo_unexecuted_blocks=1 
00:26:48.755 00:26:48.755 ' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.755 --rc genhtml_branch_coverage=1 00:26:48.755 --rc genhtml_function_coverage=1 00:26:48.755 --rc genhtml_legend=1 00:26:48.755 --rc geninfo_all_blocks=1 00:26:48.755 --rc geninfo_unexecuted_blocks=1 00:26:48.755 00:26:48.755 ' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.755 --rc genhtml_branch_coverage=1 00:26:48.755 --rc genhtml_function_coverage=1 00:26:48.755 --rc genhtml_legend=1 00:26:48.755 --rc geninfo_all_blocks=1 00:26:48.755 --rc geninfo_unexecuted_blocks=1 00:26:48.755 00:26:48.755 ' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.755 --rc genhtml_branch_coverage=1 00:26:48.755 --rc genhtml_function_coverage=1 00:26:48.755 --rc genhtml_legend=1 00:26:48.755 --rc geninfo_all_blocks=1 00:26:48.755 --rc geninfo_unexecuted_blocks=1 00:26:48.755 00:26:48.755 ' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.755 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.756 16:55:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:48.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.756 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.664 
16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:50.664 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:50.664 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:50.664 Found net devices under 0000:09:00.0: cvl_0_0 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:50.664 Found net devices under 0000:09:00.1: cvl_0_1 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.664 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.665 16:55:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:26:50.665 00:26:50.665 --- 10.0.0.2 ping statistics --- 00:26:50.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.665 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:26:50.665 00:26:50.665 --- 10.0.0.1 ping statistics --- 00:26:50.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.665 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:50.665 16:55:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.665 ************************************ 00:26:50.665 START TEST nvmf_target_disconnect_tc1 00:26:50.665 ************************************ 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.665 [2024-10-17 16:55:04.342376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.665 [2024-10-17 16:55:04.342446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64c000 with 
addr=10.0.0.2, port=4420 00:26:50.665 [2024-10-17 16:55:04.342490] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:50.665 [2024-10-17 16:55:04.342515] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:50.665 [2024-10-17 16:55:04.342530] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:50.665 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:50.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:50.665 Initializing NVMe Controllers 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.665 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.665 00:26:50.665 real 0m0.096s 00:26:50.665 user 0m0.036s 00:26:50.665 sys 0m0.060s 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.924 ************************************ 00:26:50.924 END TEST nvmf_target_disconnect_tc1 00:26:50.924 ************************************ 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:50.924 16:55:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.924 ************************************ 00:26:50.924 START TEST nvmf_target_disconnect_tc2 00:26:50.924 ************************************ 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2472890 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2472890 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2472890 ']' 00:26:50.924 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.925 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.925 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.925 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.925 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.925 [2024-10-17 16:55:04.453269] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:26:50.925 [2024-10-17 16:55:04.453364] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.925 [2024-10-17 16:55:04.516500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.925 [2024-10-17 16:55:04.576308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.925 [2024-10-17 16:55:04.576361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.925 [2024-10-17 16:55:04.576389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.925 [2024-10-17 16:55:04.576405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.925 [2024-10-17 16:55:04.576415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.925 [2024-10-17 16:55:04.577872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:50.925 [2024-10-17 16:55:04.577936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:50.925 [2024-10-17 16:55:04.578017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:50.925 [2024-10-17 16:55:04.578010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 Malloc0 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.183 16:55:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 [2024-10-17 16:55:04.752252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.183 16:55:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 [2024-10-17 16:55:04.780532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.183 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2472918 00:26:51.184 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:51.184 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:53.743 16:55:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2472890 00:26:53.743 16:55:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Write completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Write completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Write completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Write completed with error (sct=0, sc=8) 00:26:53.743 starting I/O failed 00:26:53.743 Read 
completed with error (sct=0, sc=8)
00:26:53.743 starting I/O failed
[... Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed", repeat for the remaining outstanding I/Os on the failing qpairs ...]
00:26:53.743 [2024-10-17 16:55:06.804834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:53.743 [2024-10-17 16:55:06.805213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:53.744 [2024-10-17 16:55:06.805573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:53.744 [2024-10-17 16:55:06.805892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:53.744 [2024-10-17 16:55:06.806086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.744 [2024-10-17 16:55:06.806127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.744 qpair failed and we were unable to recover it.
[... the connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet repeats many times between 16:55:06.806238 and 16:55:06.819857, for tqpair=0x1b24060, 0x7f0200000b90, 0x7f01f8000b90 and 0x7f01f4000b90, all with addr=10.0.0.2, port=4420, errno = 111 ...]
00:26:53.746 [2024-10-17 16:55:06.819962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.746 [2024-10-17 16:55:06.820006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.746 qpair failed and we were unable to recover it.
00:26:53.746 [2024-10-17 16:55:06.820129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.820157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.820270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.820295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.820438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.820464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.820557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.820583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.820759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.820785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 
00:26:53.746 [2024-10-17 16:55:06.820912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.820939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.821043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.821081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.821214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.821242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.821351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.821394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.821511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.821561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 
00:26:53.746 [2024-10-17 16:55:06.821751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.821780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.821908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.821938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.822083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.822110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.822230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.746 [2024-10-17 16:55:06.822256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.746 qpair failed and we were unable to recover it. 00:26:53.746 [2024-10-17 16:55:06.822342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.822368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.822459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.822485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.822591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.822618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.822745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.822776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.822911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.822940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.823046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.823162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.823265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.823398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.823511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.823653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.823789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.823930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.823956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.824391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.824915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.824940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.825052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.825190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.825295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.825478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.825619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.825782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.825929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.825955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.826049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.826195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.826364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 
00:26:53.747 [2024-10-17 16:55:06.826497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.826638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.826755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.747 [2024-10-17 16:55:06.826878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.747 [2024-10-17 16:55:06.826903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.747 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.826997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.827153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.827293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.827412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.827548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.827701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.827858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.827883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.828539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.828850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.828994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.829134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.829243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.829394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.829527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.829658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.829795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.829965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.829990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.830591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.830845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.830971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.831137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.831253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.831364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.831507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.831649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.831753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 
00:26:53.748 [2024-10-17 16:55:06.831891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.748 [2024-10-17 16:55:06.831916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.748 qpair failed and we were unable to recover it. 00:26:53.748 [2024-10-17 16:55:06.831996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.832141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.832253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.832388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.832531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.832669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.832814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.832950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.832994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.833149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.833268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.833445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.833605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.833722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.833831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.833935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.833961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.834051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.834210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.834322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.834464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.834674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.834846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.834962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.834989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.835110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.835211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.835347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.835449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.835565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.835704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.835857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.835885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.836048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.836074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.836226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.836265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.836390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.836417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.836532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.836558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.836680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.836706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 
00:26:53.749 [2024-10-17 16:55:06.836836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.749 [2024-10-17 16:55:06.836876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.749 qpair failed and we were unable to recover it. 00:26:53.749 [2024-10-17 16:55:06.836995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.837148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.837373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.837533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.837655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.837794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.837907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.837934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.838051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.838189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.838293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.838424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.838546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.838653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.838790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.838929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.838955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.839042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.839182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.839341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.839504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.839619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.839739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.839895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.839923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.840065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.840165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.840305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.840419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.840562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.840714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.840849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.840874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.841012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.841162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.841272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.841382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.841499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 
00:26:53.750 [2024-10-17 16:55:06.841665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.841824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.841958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.750 [2024-10-17 16:55:06.841983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.750 qpair failed and we were unable to recover it. 00:26:53.750 [2024-10-17 16:55:06.842100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.842268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 
00:26:53.751 [2024-10-17 16:55:06.842404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.842516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.842642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.842745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.842877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.842903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 
00:26:53.751 [2024-10-17 16:55:06.843025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.843137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.843268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.843370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.843510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 
00:26:53.751 [2024-10-17 16:55:06.843619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.843730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.843870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.843900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.844018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.844169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 
00:26:53.751 [2024-10-17 16:55:06.844281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.844395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.844529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.844666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 00:26:53.751 [2024-10-17 16:55:06.844901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.751 [2024-10-17 16:55:06.844959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.751 qpair failed and we were unable to recover it. 
00:26:53.751 [2024-10-17 16:55:06.845068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.845203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.845339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.845504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.845669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.845784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.845928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.845954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.846906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.751 [2024-10-17 16:55:06.846932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.751 qpair failed and we were unable to recover it.
00:26:53.751 [2024-10-17 16:55:06.847058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.847270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.847419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.847528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.847668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.847796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.847933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.847960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.848124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.848271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.848492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.848603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.848744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.848853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.848982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.849125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.849268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.849392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.849546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.849728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.849898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.849924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.850885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.752 [2024-10-17 16:55:06.850913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.752 qpair failed and we were unable to recover it.
00:26:53.752 [2024-10-17 16:55:06.851014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.851152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.851304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.851441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.851626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.851786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.851944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.851974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.852891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.852988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.853966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.853992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.854931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.854956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.855055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.753 [2024-10-17 16:55:06.855082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.753 qpair failed and we were unable to recover it.
00:26:53.753 [2024-10-17 16:55:06.855198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.855223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.855302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.855328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.855452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.855479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.855674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.855702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.855810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.855837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.856880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.856983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.857955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.857980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.858080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.858105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.858194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.858220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.858336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.858362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.858472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.754 [2024-10-17 16:55:06.858499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.754 qpair failed and we were unable to recover it.
00:26:53.754 [2024-10-17 16:55:06.858591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.858617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.858732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.858757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.858874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.858899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 
00:26:53.754 [2024-10-17 16:55:06.859254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 
00:26:53.754 [2024-10-17 16:55:06.859861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.859901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.859998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.860034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.860150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.860176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.860254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.860280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 00:26:53.754 [2024-10-17 16:55:06.860393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.860418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.754 qpair failed and we were unable to recover it. 
00:26:53.754 [2024-10-17 16:55:06.860500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.754 [2024-10-17 16:55:06.860525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.860606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.860648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.860782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.860807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.860929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.860954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.861054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.861204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.861395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.861520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.861644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.861771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.861965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.861991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.862144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.862172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.862266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.862294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.862390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.862419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.862592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.862644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.862792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.862834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.862918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.862947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.863066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.863095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.863180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.863206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.863343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.863390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.863579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.863605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.863718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.863744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.863830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.863855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.863970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.864109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.864245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.864425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.864600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.864716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.864883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.864909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.865041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.865159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.865263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.865407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.865525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 
00:26:53.755 [2024-10-17 16:55:06.865685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.865838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.755 [2024-10-17 16:55:06.865866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.755 qpair failed and we were unable to recover it. 00:26:53.755 [2024-10-17 16:55:06.865979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.866120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.866227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.866368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.866483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.866687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.866804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.866936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.866962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.867090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.867117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.867255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.867299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.867450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.867476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.867590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.867616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.867703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.867730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.867844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.867870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.868013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.868053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.868185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.868225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.868383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.868447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.868600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.868654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.868808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.868851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.868967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.868993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.869120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.869146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.869277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.869306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.869502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.869530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.869610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.869639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.869747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.869776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.869918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.869944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.870091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.870221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.870343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.870454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.870585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.870697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.870810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.870836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 
00:26:53.756 [2024-10-17 16:55:06.870977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.871034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.756 [2024-10-17 16:55:06.871146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.756 [2024-10-17 16:55:06.871175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.756 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.871267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.871295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.871387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.871415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.871574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.871603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 
00:26:53.757 [2024-10-17 16:55:06.871740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.871772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.871900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.871943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 
00:26:53.757 [2024-10-17 16:55:06.872384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.872919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.872947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 
00:26:53.757 [2024-10-17 16:55:06.873067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.873186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.873324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.873453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.873656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 
00:26:53.757 [2024-10-17 16:55:06.873799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.873938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.873964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.874071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.874110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.874207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.874235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.874355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.874381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 
00:26:53.757 [2024-10-17 16:55:06.874537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.874565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.874666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.874691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.874835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.874863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.874981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.875016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.875135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.875174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 
00:26:53.757 [2024-10-17 16:55:06.875332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.875386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.875557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.875587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.875721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.757 [2024-10-17 16:55:06.875748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.757 qpair failed and we were unable to recover it. 00:26:53.757 [2024-10-17 16:55:06.875867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.875893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.875976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.876126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.876281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.876430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.876566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.876700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.876851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.876897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.877028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.877190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.877337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.877505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.877694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.877800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.877947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.877974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.878112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.878272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.878412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.878581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.878691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.878797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.878924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.878963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.879057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.879085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.879225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.879253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.879381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.879435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.879550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.879579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.879728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.879758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.879848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.879876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.880021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.880048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.880162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.880187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.880300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.880326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.880463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.880492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.880647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.880675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.880824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.880852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.880946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.881011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.881127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.881153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.881261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.881286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 
00:26:53.758 [2024-10-17 16:55:06.881360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.758 [2024-10-17 16:55:06.881385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.758 qpair failed and we were unable to recover it. 00:26:53.758 [2024-10-17 16:55:06.881515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.881543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.881657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.881685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.881778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.881806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.881916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.881941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 
00:26:53.759 [2024-10-17 16:55:06.882031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.882133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.882280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.882523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.882690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 
00:26:53.759 [2024-10-17 16:55:06.882826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.882945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.882972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 
00:26:53.759 [2024-10-17 16:55:06.883533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.883882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.883992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.884036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 
00:26:53.759 [2024-10-17 16:55:06.884139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.884178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.884330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.884358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.884522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.884576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.884734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.884787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.884930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.884958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 
00:26:53.759 [2024-10-17 16:55:06.885055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.885082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.885167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.885192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.885271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.885316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.885474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.885534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 00:26:53.759 [2024-10-17 16:55:06.885654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.759 [2024-10-17 16:55:06.885682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.759 qpair failed and we were unable to recover it. 
00:26:53.762 [2024-10-17 16:55:06.902404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.762 [2024-10-17 16:55:06.902432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.762 qpair failed and we were unable to recover it. 00:26:53.762 [2024-10-17 16:55:06.902557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.762 [2024-10-17 16:55:06.902586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.762 qpair failed and we were unable to recover it. 00:26:53.762 [2024-10-17 16:55:06.902737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.762 [2024-10-17 16:55:06.902765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.762 qpair failed and we were unable to recover it. 00:26:53.762 [2024-10-17 16:55:06.902885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.762 [2024-10-17 16:55:06.902913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.762 qpair failed and we were unable to recover it. 00:26:53.762 [2024-10-17 16:55:06.903055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.762 [2024-10-17 16:55:06.903082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.762 qpair failed and we were unable to recover it. 
00:26:53.762 [2024-10-17 16:55:06.903193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.903236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.903360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.903388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.903481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.903510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.903599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.903627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.903721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.903749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.903877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.903905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.904012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.904154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.904329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.904511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.904703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.904822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.904943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.904969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.905069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.905195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.905409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.905589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.905725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.905843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.905956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.905982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.906109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.906136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.906276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.906307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.906429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.906458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.906607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.906636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.906747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.906793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.906896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.906922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.907079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.907212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.907373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.907520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.907640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.907799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.907961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.907986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.908083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.908109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.908208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.908234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 
00:26:53.763 [2024-10-17 16:55:06.908313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.908340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.908451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.763 [2024-10-17 16:55:06.908477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.763 qpair failed and we were unable to recover it. 00:26:53.763 [2024-10-17 16:55:06.908591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.908616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.908700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.908728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.908819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.908844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.908940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.908966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.909093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.909207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.909349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.909489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.909607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.909734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.909921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.909949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.910031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.910143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.910279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.910451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.910587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.910742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.910879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.910907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.911114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.911146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.911346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.911376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.911503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.911533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.911693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.911721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.911816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.911846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.911969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.912006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.912138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.912167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.912301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.912343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.912491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.912520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.912655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.912681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.912817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.912856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.913027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.913086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.913204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.913238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.913390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.913418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 00:26:53.764 [2024-10-17 16:55:06.913541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.764 [2024-10-17 16:55:06.913583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.764 qpair failed and we were unable to recover it. 
00:26:53.764 [2024-10-17 16:55:06.913745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.764 [2024-10-17 16:55:06.913797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.764 qpair failed and we were unable to recover it.
00:26:53.764 [2024-10-17 16:55:06.913878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.913903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.913990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.914928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.914954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.915112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.915258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.915394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.915529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.915695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.915865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.915984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.916902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.916929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.917066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.917111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.917216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.917255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.917404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.917431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.917569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.917619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.917755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.917809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.917942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.917969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.918060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.918086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.918179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.918205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.918304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.918332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.918445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.918473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.765 [2024-10-17 16:55:06.918594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.765 [2024-10-17 16:55:06.918623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.765 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.918798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.918843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.918936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.918962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.919074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.919113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.919269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.919315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.919412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.919440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.919545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.919575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.919717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.919760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.919891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.919917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.920953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.920979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.921138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.921182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.921349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.921380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.921504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.921549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.921663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.921690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.921801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.921831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.921933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.921958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.922055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.922082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.922177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.922203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.922302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.922351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.922529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.922575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.922728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.922772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.922912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.922941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.923065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.923093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.923207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.923238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.923358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.923384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.923511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.923539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.923643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.923675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.923802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.923831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.924007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.924036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.924133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.924160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.924249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.924275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.766 [2024-10-17 16:55:06.924485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.766 [2024-10-17 16:55:06.924541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.766 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.924656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.924706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.924790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.924816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.924897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.924924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.925053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.925081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.925181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.925219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.925338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.925365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.925553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.925603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.925747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.925802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.925891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.925932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.926892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.926931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.927047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.927086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.927207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.927234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.927358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.927393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.927529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.927557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.927739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.927779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.927980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.928158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.928272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.928513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.928628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.928794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.928916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.928946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.929066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.929093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.767 [2024-10-17 16:55:06.929202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.767 [2024-10-17 16:55:06.929228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.767 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.929372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.929403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.929528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.929556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.929709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.929737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.929858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.929886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.930022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.930048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.930136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.930164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.930304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.930356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.930562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.930612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.930724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.930764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.930896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.930925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.931053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.931079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.931157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.768 [2024-10-17 16:55:06.931183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.768 qpair failed and we were unable to recover it.
00:26:53.768 [2024-10-17 16:55:06.931321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.931351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.931455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.931497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.931647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.931675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.931828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.931875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.931998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.932049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 
00:26:53.768 [2024-10-17 16:55:06.932163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.932189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.932389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.932419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.932544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.932588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.932742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.932796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.932924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.932954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 
00:26:53.768 [2024-10-17 16:55:06.933122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.933149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.933262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.933305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.933444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.933470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.933582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.933608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.933748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.933791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 
00:26:53.768 [2024-10-17 16:55:06.933923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.933953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.934097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.934128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.934242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.934268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.934374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.934400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.934526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.934552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 
00:26:53.768 [2024-10-17 16:55:06.934708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.934751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.934890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.934933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.935124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.935152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.935248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.768 [2024-10-17 16:55:06.935274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.768 qpair failed and we were unable to recover it. 00:26:53.768 [2024-10-17 16:55:06.935426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.935454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.935564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.935604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.935718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.935746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.935886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.935911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.936014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.936132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.936250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.936390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.936532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.936653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.936792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.936904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.936930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.937056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.937176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.937322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.937484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.937608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.937727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.937904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.937932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.938097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.938142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.938275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.938315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.938433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.938477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.938573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.938604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.938718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.938761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.938858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.938888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.938984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.939127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.939231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.939340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.939573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.939734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 
00:26:53.769 [2024-10-17 16:55:06.939872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.939913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.940069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.940095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.769 [2024-10-17 16:55:06.940196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.769 [2024-10-17 16:55:06.940222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.769 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.940424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.940450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.940616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.940669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.940792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.940820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.940911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.940940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.941045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.941184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.941313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.941446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.941566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.941727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.941915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.941954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.942104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.942246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.942423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.942575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.942700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.942827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.942955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.942980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.943099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.943126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.943252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.943294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.943422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.943450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.943610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.943639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.943758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.943787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.943893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.943919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.944398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.944954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.944998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 
00:26:53.770 [2024-10-17 16:55:06.945139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.770 [2024-10-17 16:55:06.945178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.770 qpair failed and we were unable to recover it. 00:26:53.770 [2024-10-17 16:55:06.945316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.945345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.945466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.945494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.945614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.945642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.945765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.945793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.945885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.945913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.946077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.946230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.946371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.946526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.946709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.946820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.946960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.946986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.947092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.947206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.947343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.947475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.947600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.947752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.947864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.947889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.947994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.948185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.948306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.948417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.948607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.948777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.948885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.948914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.949023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.949167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.949310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.949463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.949575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.949731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.949868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.949907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.950057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.950085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 
00:26:53.771 [2024-10-17 16:55:06.950217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.950245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.771 [2024-10-17 16:55:06.950369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.771 [2024-10-17 16:55:06.950397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.771 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.950495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.950523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.950639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.950667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.950817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.950863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 
00:26:53.772 [2024-10-17 16:55:06.951012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.951148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.951280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.951410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.951569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 
00:26:53.772 [2024-10-17 16:55:06.951725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.951831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.951960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.951990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.952106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.952135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.952257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.952285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 
00:26:53.772 [2024-10-17 16:55:06.952455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.952492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.952689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.952734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.952871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.952897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.953018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.953154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 
00:26:53.772 [2024-10-17 16:55:06.953291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.953426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.953561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.953679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.953818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 
00:26:53.772 [2024-10-17 16:55:06.953931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.953958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.954091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.954118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.954205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.954231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.954326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.954355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.954479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.954507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 
00:26:53.772 [2024-10-17 16:55:06.954606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.954634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.772 qpair failed and we were unable to recover it. 00:26:53.772 [2024-10-17 16:55:06.954765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.772 [2024-10-17 16:55:06.954790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.954918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.954978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.955104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.955134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.955255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.955283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 
00:26:53.773 [2024-10-17 16:55:06.955396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.955425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.955546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.955574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.955728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.955775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.955866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.955892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.955969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 
00:26:53.773 [2024-10-17 16:55:06.956146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.956329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.956511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.956638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.956754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 
00:26:53.773 [2024-10-17 16:55:06.956868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.956893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.957014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.957059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.957182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.957211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.957357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.957408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.957588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.957637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 
00:26:53.773 [2024-10-17 16:55:06.957758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.957800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.957907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.957932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.958057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.958096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.958240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.958270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 00:26:53.773 [2024-10-17 16:55:06.958424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.773 [2024-10-17 16:55:06.958453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.773 qpair failed and we were unable to recover it. 
00:26:53.773 [2024-10-17 16:55:06.958547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.958575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.958724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.958769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.958854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.958880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.958980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.773 [2024-10-17 16:55:06.959934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.773 [2024-10-17 16:55:06.959960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.773 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.960084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.960109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.960198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.960224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.960360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.960388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.960511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.960540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.960629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.960658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.960854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.960913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.961028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.961055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.961180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.961225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.961366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.961410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.961574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.961617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.961733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.961760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.961877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.961903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.962924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.962950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.963973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.963999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.964973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.964998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.965133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.965163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.965284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.965313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.774 qpair failed and we were unable to recover it.
00:26:53.774 [2024-10-17 16:55:06.965451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.774 [2024-10-17 16:55:06.965477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.965569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.965595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.965731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.965756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.965862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.965888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.965972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.966877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.966906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.967945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.967971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.968071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.968099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.968218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.968246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.968367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.968395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.968489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.968517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.968642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.968670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.968819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.968864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.969968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.969998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.970181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.970210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.970324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.970351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.970463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.970490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.970597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.970642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.775 [2024-10-17 16:55:06.970754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.775 [2024-10-17 16:55:06.970782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.775 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.970914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.970955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.971086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.971115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.971247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.971277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.971457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.971507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.971602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.971632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.971839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.971867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.971963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.971992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.972110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.972136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.972244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.972273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.972428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.972456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.972622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.776 [2024-10-17 16:55:06.972680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.776 qpair failed and we were unable to recover it.
00:26:53.776 [2024-10-17 16:55:06.972792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.972821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.973008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.973048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.973171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.973199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.973345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.973389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.973534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.973582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 
00:26:53.776 [2024-10-17 16:55:06.973689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.973738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.973836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.973862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.973974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.974136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.974318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 
00:26:53.776 [2024-10-17 16:55:06.974516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.974688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.974831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.974943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.974969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.975088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 
00:26:53.776 [2024-10-17 16:55:06.975229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.975337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.975474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.975612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.975748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 
00:26:53.776 [2024-10-17 16:55:06.975873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.975913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.976024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.776 [2024-10-17 16:55:06.976063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.776 qpair failed and we were unable to recover it. 00:26:53.776 [2024-10-17 16:55:06.976181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.976207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.976291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.976321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.976408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.976433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.976544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.976569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.976681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.976711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.976852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.976882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.976988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.977126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.977290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.977447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.977658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.977798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.977929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.977961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.978083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.978197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.978332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.978518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.978724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.978858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.978971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.978997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.979141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.979167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.979335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.979383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.979508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.979558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.979723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.979753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.979856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.979881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.979993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.980102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.980223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.980333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.980493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.980632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.980790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.980938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.980966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 
00:26:53.777 [2024-10-17 16:55:06.981077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.981103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.981232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.981271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.777 qpair failed and we were unable to recover it. 00:26:53.777 [2024-10-17 16:55:06.981374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.777 [2024-10-17 16:55:06.981412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.981552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.981582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.981709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.981738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.981861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.981890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.981994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.982117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.982258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.982418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.982553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.982687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.982887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.982943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.983067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.983210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.983325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.983463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.983588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.983713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.983892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.983916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.984042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.984082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.984234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.984261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.984425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.984459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.984584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.984613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.984758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.984786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.984911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.984951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.985086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.985115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.985205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.985232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.985380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.985429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.985569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.985616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.985752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.985778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.985856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.985882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.985990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.986133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.986263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 
00:26:53.778 [2024-10-17 16:55:06.986433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.986642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.986848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.778 [2024-10-17 16:55:06.986968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.778 [2024-10-17 16:55:06.986996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.778 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.987108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.987134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.987226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.987251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.987390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.987415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.987574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.987625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.987776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.987804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.987934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.987977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.988089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.988117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.988211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.988252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.988350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.988378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.988480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.988507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.988646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.988704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.988848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.988875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.989507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.989895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.989927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.990084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.990230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.990369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.990521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.990649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.990775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.990915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.990945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.991040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.991146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.991310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.991461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 
00:26:53.779 [2024-10-17 16:55:06.991595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.991746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.779 [2024-10-17 16:55:06.991855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.779 [2024-10-17 16:55:06.991879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.779 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.991962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.991986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.992082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.992215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.992345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.992468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.992621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.992804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.992909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.992932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.993076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.993255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.993427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.993599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.993734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.993836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.993957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.993982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.994081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.994220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.994379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.994513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.994619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.994732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.994852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.994891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.995032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.995169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.995291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.995403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.995549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.995679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.995837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.995863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.995977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.996106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.996230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.996382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.996523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.996707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.996856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.996881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 00:26:53.780 [2024-10-17 16:55:06.996995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.780 [2024-10-17 16:55:06.997027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.780 qpair failed and we were unable to recover it. 
00:26:53.780 [2024-10-17 16:55:06.997117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.997143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.997252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.997300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.997445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.997473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.997597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.997626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.997788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.997847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 
00:26:53.781 [2024-10-17 16:55:06.997968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.997996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.998098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.998128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.998214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.998240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.998377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.998407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.998496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.998526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 
00:26:53.781 [2024-10-17 16:55:06.998656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.998687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.998866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.998924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.999062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.999102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.999199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.999227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.999364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.999394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 
00:26:53.781 [2024-10-17 16:55:06.999531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.999577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.999738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.999799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:06.999923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:06.999954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:07.000079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.000105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:07.000196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.000221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 
00:26:53.781 [2024-10-17 16:55:07.000361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.000387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:07.000501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.000551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:07.000681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.000709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:07.000831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.000861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 00:26:53.781 [2024-10-17 16:55:07.001013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.781 [2024-10-17 16:55:07.001053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.781 qpair failed and we were unable to recover it. 
00:26:53.781 [2024-10-17 16:55:07.001153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.001181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.001291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.001318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.001444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.001473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.001577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.001605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.001702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.001732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.001869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.001913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.002070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.781 [2024-10-17 16:55:07.002110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.781 qpair failed and we were unable to recover it.
00:26:53.781 [2024-10-17 16:55:07.002230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.002258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.002431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.002488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.002610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.002657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.002741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.002766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.002896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.002935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.003894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.003923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.004889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.004917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.005910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.005950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.006914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.006939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.007026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.007052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.782 [2024-10-17 16:55:07.007162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.782 [2024-10-17 16:55:07.007192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.782 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.007317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.007346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.007436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.007465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.007582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.007611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.007696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.007724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.007808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.007837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.007944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.007972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.008875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.008915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.009893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.009920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.010943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.010972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.011919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.011951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.012027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.012054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.012139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.012165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.012270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.783 [2024-10-17 16:55:07.012298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.783 qpair failed and we were unable to recover it.
00:26:53.783 [2024-10-17 16:55:07.012382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.012411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.012537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.012567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.012689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.012718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.012856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.012882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.012962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.012987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.013912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.013940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.014076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31ff0 is same with the state(6) to be set
00:26:53.784 [2024-10-17 16:55:07.014228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.014267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.014418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.014449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.014597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.014628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.014730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.784 [2024-10-17 16:55:07.014759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.784 qpair failed and we were unable to recover it.
00:26:53.784 [2024-10-17 16:55:07.014879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.014908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.015024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.015141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.015273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.015409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 
00:26:53.784 [2024-10-17 16:55:07.015563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.015688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.015886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.015944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.016067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.016209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 
00:26:53.784 [2024-10-17 16:55:07.016346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.016479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.016616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.016768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.016935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.016964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 
00:26:53.784 [2024-10-17 16:55:07.017099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.017127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.017236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.017264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.017466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.017514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.784 [2024-10-17 16:55:07.017626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.784 [2024-10-17 16:55:07.017676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.784 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.017796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.017822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.017940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.017972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.018099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.018126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.018236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.018265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.018390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.018419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.018539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.018568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.018724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.018770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.018913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.018938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.019037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.019063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.019175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.019220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.019324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.019352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.019504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.019535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.019686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.019736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.019870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.019897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.019982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.020113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.020221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.020358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.020500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.020670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.020807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.020944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.020984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.021098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.021280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.021416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.021555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.021665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.021807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.021948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.021987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.022096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.022221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.022368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.022503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.022647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.022793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.785 [2024-10-17 16:55:07.022929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.022955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 
00:26:53.785 [2024-10-17 16:55:07.023049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.785 [2024-10-17 16:55:07.023096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.785 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.023187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.023309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.023421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.023573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 
00:26:53.786 [2024-10-17 16:55:07.023693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.023824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.023952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.023977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.024090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.024234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 
00:26:53.786 [2024-10-17 16:55:07.024400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.024507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.024614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.024783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.024954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.024982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 
00:26:53.786 [2024-10-17 16:55:07.025131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.025160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.025278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.025307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.025434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.025462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.025572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.025600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.025726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.025756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 
00:26:53.786 [2024-10-17 16:55:07.025879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.025909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.026044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.026197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.026315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.026444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 
00:26:53.786 [2024-10-17 16:55:07.026573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.026746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.026902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.026927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.027053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.027084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 00:26:53.786 [2024-10-17 16:55:07.027194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.786 [2024-10-17 16:55:07.027219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.786 qpair failed and we were unable to recover it. 
00:26:53.786 [2024-10-17 16:55:07.027301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.786 [2024-10-17 16:55:07.027327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.786 qpair failed and we were unable to recover it.
00:26:53.786 [2024-10-17 16:55:07.027443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.786 [2024-10-17 16:55:07.027469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.786 qpair failed and we were unable to recover it.
00:26:53.786 [2024-10-17 16:55:07.027549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.786 [2024-10-17 16:55:07.027585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.786 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.027671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.027696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.027859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.027903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.027998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.028900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.028926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.029940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.029968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.030912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.030939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.031862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.031889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.032899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.032924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.033015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.787 [2024-10-17 16:55:07.033044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.787 qpair failed and we were unable to recover it.
00:26:53.787 [2024-10-17 16:55:07.033148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.033176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.033320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.033364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.033469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.033515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.033681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.033712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.033811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.033853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.033972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.034941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.034969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.035096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.035128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.035263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.035309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.035442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.035489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.035614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.035658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.035753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.035792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.035914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.035940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.036861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.036990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.037127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.037259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.037441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.037637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.037790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.037927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.037956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.038061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.038088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.038227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.038253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.038390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.038418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.038582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.038610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.038721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.038747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.038850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.038878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.788 [2024-10-17 16:55:07.039016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.788 [2024-10-17 16:55:07.039043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.788 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.039248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.039289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.039461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.039487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.039619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.039648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.039772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.039800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.039926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.039954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.040102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.040142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.040283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.040329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.040442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.040486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.040630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.040674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.040787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.040814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.040922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.040949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.041081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.041230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.041432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.041639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.041774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.041883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.041998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.042061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.042191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.042221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.042379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.042409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.042513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.042562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.042734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.042762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.042885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.042914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.043069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.043099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.043242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.043272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.043418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.043448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.043554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.043583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.043749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.043776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.043887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.043913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.044028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.044073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.044179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.044207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.044344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.789 [2024-10-17 16:55:07.044372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.789 qpair failed and we were unable to recover it.
00:26:53.789 [2024-10-17 16:55:07.044472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.044497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.044608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.044633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.044720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.044746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.044861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.044887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.044993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.045024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 
00:26:53.789 [2024-10-17 16:55:07.045109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.045135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.045224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.045251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.045386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.045431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.789 qpair failed and we were unable to recover it. 00:26:53.789 [2024-10-17 16:55:07.045551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.789 [2024-10-17 16:55:07.045585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.045736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.045779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.045870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.045896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.046589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.046961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.046988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.047117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.047144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.047236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.047262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.047401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.047447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.047546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.047573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.047730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.047760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.047866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.047895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.047991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.048134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.048237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.048382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.048560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.048713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.048852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.048881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.049036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.049077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.049187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.049216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.049366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.049408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.049572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.049626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.049749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.049777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.049902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.049930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.050072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.050112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.050254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.050285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.050391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.050416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.050527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.050556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.050693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.050721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.050826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.050854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.790 [2024-10-17 16:55:07.050975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.051012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 
00:26:53.790 [2024-10-17 16:55:07.051121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.790 [2024-10-17 16:55:07.051150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.790 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.051276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.051305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.051408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.051437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.051558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.051587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.051683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.051711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 
00:26:53.791 [2024-10-17 16:55:07.051809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.051851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.051983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.052041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.052146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.052173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.052272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.052302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.052490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.052533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 
00:26:53.791 [2024-10-17 16:55:07.052661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.052705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.052819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.052858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.052978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.053096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.053211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 
00:26:53.791 [2024-10-17 16:55:07.053365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.053541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.053752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.053902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.053928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.054091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 
00:26:53.791 [2024-10-17 16:55:07.054231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.054371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.054526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.054671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.054790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 
00:26:53.791 [2024-10-17 16:55:07.054930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.054956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.055090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.055117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.055220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.055258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.055388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.055416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 00:26:53.791 [2024-10-17 16:55:07.055534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.791 [2024-10-17 16:55:07.055561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.791 qpair failed and we were unable to recover it. 
00:26:53.791 [2024-10-17 16:55:07.055687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.791 [2024-10-17 16:55:07.055718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.791 qpair failed and we were unable to recover it.
00:26:53.791 [2024-10-17 16:55:07.055808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.791 [2024-10-17 16:55:07.055834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.791 qpair failed and we were unable to recover it.
00:26:53.791 [2024-10-17 16:55:07.055919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.791 [2024-10-17 16:55:07.055944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.791 qpair failed and we were unable to recover it.
00:26:53.791 [2024-10-17 16:55:07.056081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.791 [2024-10-17 16:55:07.056110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.791 qpair failed and we were unable to recover it.
00:26:53.791 [2024-10-17 16:55:07.056202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.791 [2024-10-17 16:55:07.056231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.791 qpair failed and we were unable to recover it.
00:26:53.791 [2024-10-17 16:55:07.056355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.056384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.056505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.056533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.056655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.056683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.056790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.056817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.056916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.056941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.057076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.057196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.057336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.057532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.057695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.057852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.057989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.058132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.058237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.058389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.058550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.058704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.058874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.058903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.059841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.059870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.060886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.060914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.061052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.792 [2024-10-17 16:55:07.061078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.792 qpair failed and we were unable to recover it.
00:26:53.792 [2024-10-17 16:55:07.061166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.061210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.061312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.061341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.061435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.061469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.061573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.061602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.061719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.061762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.061863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.061892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.062909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.062938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.063094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.063205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.063349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.063501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.063643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.063803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.063969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.064110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.064275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.064455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.064611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.064772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.064893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.064921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.065861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.065887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.793 [2024-10-17 16:55:07.066881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.793 [2024-10-17 16:55:07.066919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.793 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.067968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.067996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.068179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.068209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.068330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.068358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.068443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.068471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.068596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.068624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.068720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.068748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.068864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.068892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.069040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.069069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.069189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.069214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.069347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.069392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.069529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.069558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.069706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.069751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.069872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.069900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.070846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.070875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.071877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.071912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.072013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.072039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.072149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.794 [2024-10-17 16:55:07.072192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.794 qpair failed and we were unable to recover it.
00:26:53.794 [2024-10-17 16:55:07.072348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.794 [2024-10-17 16:55:07.072376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.794 qpair failed and we were unable to recover it. 00:26:53.794 [2024-10-17 16:55:07.072463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.794 [2024-10-17 16:55:07.072491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.794 qpair failed and we were unable to recover it. 00:26:53.794 [2024-10-17 16:55:07.072606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.794 [2024-10-17 16:55:07.072634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.794 qpair failed and we were unable to recover it. 00:26:53.794 [2024-10-17 16:55:07.072756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.794 [2024-10-17 16:55:07.072785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.794 qpair failed and we were unable to recover it. 00:26:53.794 [2024-10-17 16:55:07.072931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.794 [2024-10-17 16:55:07.072959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.794 qpair failed and we were unable to recover it. 
00:26:53.795 [2024-10-17 16:55:07.073100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.073259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.073402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.073552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.073679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 
00:26:53.795 [2024-10-17 16:55:07.073799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.073953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.073978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.074063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.074173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.074301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 
00:26:53.795 [2024-10-17 16:55:07.074422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.074595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.074788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.074958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.074984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.075120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 
00:26:53.795 [2024-10-17 16:55:07.075278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.075399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.075540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.075681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.075799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 
00:26:53.795 [2024-10-17 16:55:07.075910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.075936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.076023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.076051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.076188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.076217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.076321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.076349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 00:26:53.795 [2024-10-17 16:55:07.076445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.076474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.795 qpair failed and we were unable to recover it. 
00:26:53.795 [2024-10-17 16:55:07.076568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.795 [2024-10-17 16:55:07.076597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.076711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.076757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.076876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.076904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.077026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.077145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.077300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.077451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.077610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.077793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.077929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.077954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.078071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.078098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.078198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.078228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.078390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.078420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.078541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.078570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.078697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.078725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.078855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.078897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.079013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.079130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.079266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.079424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.079590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.079753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.079889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.079915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.080029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.080068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.080168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.080195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.080308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.080337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.080467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.080496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.080623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.080652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.080822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.080879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.081013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.081041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.081163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.081194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.081323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.081351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.081538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.081585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.081678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.081706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.081807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.081836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.081996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.082027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.082127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.082154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.082241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.082268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.082390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.082447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.082552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.082598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 
00:26:53.796 [2024-10-17 16:55:07.082732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.796 [2024-10-17 16:55:07.082781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.796 qpair failed and we were unable to recover it. 00:26:53.796 [2024-10-17 16:55:07.082872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.082898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.082997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.083174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.083294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 
00:26:53.797 [2024-10-17 16:55:07.083417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.083557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.083670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.083835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 00:26:53.797 [2024-10-17 16:55:07.083969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.797 [2024-10-17 16:55:07.083994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.797 qpair failed and we were unable to recover it. 
00:26:53.797 [2024-10-17 16:55:07.084086 .. 16:55:07.099420] posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: the preceding pair of *ERROR* messages repeated for roughly 110 further connect() attempts, all failing with errno = 111 (ECONNREFUSED) for tqpair=0x7f0200000b90, 0x7f01f8000b90 and 0x1b24060 with addr=10.0.0.2, port=4420; every attempt ended with "qpair failed and we were unable to recover it."
00:26:53.800 [2024-10-17 16:55:07.099513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.099543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.099642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.099671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.099758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.099785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.099891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.099919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.100086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 
00:26:53.800 [2024-10-17 16:55:07.100240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.100370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.100507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.100663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.100784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 
00:26:53.800 [2024-10-17 16:55:07.100940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.100965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.101076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.101210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.101332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.101482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 
00:26:53.800 [2024-10-17 16:55:07.101633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.101776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.101930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.101970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.800 [2024-10-17 16:55:07.102084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.800 [2024-10-17 16:55:07.102109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.800 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.102249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.102277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.102373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.102401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.102505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.102533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.102688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.102720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.102832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.102857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.102952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.102991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.103117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.103145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.103237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.103284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.103383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.103412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.103509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.103538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.103641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.103685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.103833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.103864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.104069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.104102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.104218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.104245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.104383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.104429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.104598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.104645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.104725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.104750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.104839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.104865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.104979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.105145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.105292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.105442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.105541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.105660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.105779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.105958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.105998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.106145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.106305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.106430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.106554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.106679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.106796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.106934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.106974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.107084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.107113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.107202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.107228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.107349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.107393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 
00:26:53.801 [2024-10-17 16:55:07.107528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.107572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.107735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.107787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.107889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.107919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.801 qpair failed and we were unable to recover it. 00:26:53.801 [2024-10-17 16:55:07.108014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.801 [2024-10-17 16:55:07.108043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.108162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.108191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 
00:26:53.802 [2024-10-17 16:55:07.108283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.108311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.108433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.108479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.108666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.108718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.108813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.108840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.108924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.108950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 
00:26:53.802 [2024-10-17 16:55:07.109040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.109212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.109393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.109569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.109698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 
00:26:53.802 [2024-10-17 16:55:07.109862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.109966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.109996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.110119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.110147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.110264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.110292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 00:26:53.802 [2024-10-17 16:55:07.110408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.802 [2024-10-17 16:55:07.110435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.802 qpair failed and we were unable to recover it. 
00:26:53.802 [2024-10-17 16:55:07.110531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.110559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.110650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.110679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.110811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.110841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.110972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.110997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.111127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.111155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.111273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.111319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.111479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.111522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.111626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.111657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.111806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.111853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.112875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.112902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.113876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.802 [2024-10-17 16:55:07.113985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.802 [2024-10-17 16:55:07.114028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.802 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.114127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.114154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.114290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.114315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.114468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.114493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.114641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.114669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.114791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.114822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.115033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.115081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.115196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.115223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.115414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.115444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.115607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.115637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.115744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.115787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.115921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.115948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.116038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.116066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.116178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.116223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.116427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.116456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.116601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.116647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.116785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.116818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.116988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.117031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.117162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.117188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.117389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.117450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.117554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.117590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.117724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.117772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.117862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.117889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.117983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.118960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.118986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.119914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.119941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.120039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.120066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.120169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.803 [2024-10-17 16:55:07.120208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.803 qpair failed and we were unable to recover it.
00:26:53.803 [2024-10-17 16:55:07.120348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.120379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.120509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.120538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.120661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.120692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.120786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.120817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.120927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.120956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.121102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.121128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.121223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.121251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.121412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.121457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.121562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.121592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.121741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.121785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.121932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.121957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.122065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.122093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.122239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.122285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.122437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.122480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.122591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.122635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.122746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.122772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.122873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.122898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.123920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.123947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.124930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.124958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.125060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.125087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.125183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.125222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.125376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.125420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.804 qpair failed and we were unable to recover it.
00:26:53.804 [2024-10-17 16:55:07.125540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.804 [2024-10-17 16:55:07.125586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.125686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.125718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.125813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.125842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.125954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.125982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.126950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.126979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.127106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.127223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.127385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.127573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.127738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.805 [2024-10-17 16:55:07.127881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.805 qpair failed and we were unable to recover it.
00:26:53.805 [2024-10-17 16:55:07.127995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.128028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.128160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.128189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.128312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.128342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.128442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.128470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.128620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.128675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 
00:26:53.805 [2024-10-17 16:55:07.128830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.128860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.128970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.129135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.129250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.129455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 
00:26:53.805 [2024-10-17 16:55:07.129643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.129772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.129913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.129939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.130024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.130164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 
00:26:53.805 [2024-10-17 16:55:07.130276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.130430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.130566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.130816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.130963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.130994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 
00:26:53.805 [2024-10-17 16:55:07.131137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.131184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.131283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.805 [2024-10-17 16:55:07.131313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.805 qpair failed and we were unable to recover it. 00:26:53.805 [2024-10-17 16:55:07.131435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.131464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.131595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.131625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.131748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.131778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.131902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.131932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.132040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.132169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.132277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.132426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.132628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.132822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.132963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.132990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.133108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.133220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.133384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.133516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.133655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.133810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.133949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.133979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.134100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.134127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.134220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.134246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.134371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.134401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.134553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.134579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.134746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.134775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.134916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.134959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.135117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.135146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.135265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.135291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.135382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.135410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.135569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.135617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.135827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.135875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.136013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.136174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.136321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.136438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.136604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.136736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.136907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.136933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.137051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.137079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.137163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.137189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 
00:26:53.806 [2024-10-17 16:55:07.137329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.806 [2024-10-17 16:55:07.137354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.806 qpair failed and we were unable to recover it. 00:26:53.806 [2024-10-17 16:55:07.137438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.137463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.137610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.137668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.137779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.137810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.137928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.137972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.138121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.138149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.138299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.138328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.138464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.138512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.138652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.138701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.138800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.138829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.138936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.138980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.139077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.139187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.139347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.139493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.139606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.139738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.139855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.139883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.140016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.140044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.140125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.140150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.140307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.140336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.140498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.140528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.140662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.140695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.140818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.140848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.140988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.141187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.141326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.141432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.141578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.141698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.141812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.141948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.141974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.142064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.142203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.142338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.142451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.142630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.142772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.142912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.142939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.807 [2024-10-17 16:55:07.143031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.143059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 
00:26:53.807 [2024-10-17 16:55:07.143140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.807 [2024-10-17 16:55:07.143166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.807 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.143255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.143281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.143396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.143422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.143504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.143548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.143667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.143696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.143796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.143823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.143917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.143944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.144072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.144197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.144332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.144456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.144584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.144705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.144839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.144895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.145018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.145046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.145159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.145187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.145333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.145361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.145510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.145553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.145668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.145712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.145807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.145834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.145964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.146103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.146236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.146411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.146576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.146770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.146920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.146950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.147101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.147127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.147262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.147292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.147423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.147471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.147636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.147693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.147806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.147832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.147955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.147991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.148090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.148201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.148341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.148468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.148609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.148725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 00:26:53.808 [2024-10-17 16:55:07.148862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.808 [2024-10-17 16:55:07.148888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.808 qpair failed and we were unable to recover it. 
00:26:53.808 [2024-10-17 16:55:07.148979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.149101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.149241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.149401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.149553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 
00:26:53.809 [2024-10-17 16:55:07.149684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.149817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.149946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.149985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.150122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.150288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 
00:26:53.809 [2024-10-17 16:55:07.150456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.150595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.150709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.150828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.150950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.150988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 
00:26:53.809 [2024-10-17 16:55:07.151120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.151149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.151257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.151288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.151386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.151417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.151568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.151600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.151699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.151729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 
00:26:53.809 [2024-10-17 16:55:07.151829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.151858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.152058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.152174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.152333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.152466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 
00:26:53.809 [2024-10-17 16:55:07.152613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.152763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.152881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.152910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.153015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.153042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.153124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.153150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 
00:26:53.809 [2024-10-17 16:55:07.153317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.153363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.153476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.153510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.153675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.153722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.153819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.809 [2024-10-17 16:55:07.153844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.809 qpair failed and we were unable to recover it. 00:26:53.809 [2024-10-17 16:55:07.153964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.810 [2024-10-17 16:55:07.153990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.810 qpair failed and we were unable to recover it. 
00:26:53.810 [2024-10-17 16:55:07.154111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.810 [2024-10-17 16:55:07.154137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.810 qpair failed and we were unable to recover it. 00:26:53.810 [2024-10-17 16:55:07.154250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.810 [2024-10-17 16:55:07.154302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.810 qpair failed and we were unable to recover it. 00:26:53.810 [2024-10-17 16:55:07.154416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.810 [2024-10-17 16:55:07.154463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.810 qpair failed and we were unable to recover it. 00:26:53.810 [2024-10-17 16:55:07.154573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.810 [2024-10-17 16:55:07.154607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.810 qpair failed and we were unable to recover it. 00:26:53.810 [2024-10-17 16:55:07.154779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.810 [2024-10-17 16:55:07.154828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.810 qpair failed and we were unable to recover it. 
00:26:53.810 [2024-10-17 16:55:07.154930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.154959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.155098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.155125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.155208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.155252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.155451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.155482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.155603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.155634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.155749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.155779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.155931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.155960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.156126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.156153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.156243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.156269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.156422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.156453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.156630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.156660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.156760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.156791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.156889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.156918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.157075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.157227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.157392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.157542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.157666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.157821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.157983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.158050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.158253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.158281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.158441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.158471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.158593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.158639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.158806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.158836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.158959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.158990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.159134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.159303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.159464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.159588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.159767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.159888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.159990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.160027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.160118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.160143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.160231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.160256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.810 qpair failed and we were unable to recover it.
00:26:53.810 [2024-10-17 16:55:07.160347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.810 [2024-10-17 16:55:07.160375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.160515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.160546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.160645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.160680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.160818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.160848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.160964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.161158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.161314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.161484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.161643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.161784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.161923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.161948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.162902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.162928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.163960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.163988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.164107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.164133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.164221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.164247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.164380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.164408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.164524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.164552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.164666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.164710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.164886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.164943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.165065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.165238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.165373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.165533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.165721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.165843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.165976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.166008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.166129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.166157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.811 [2024-10-17 16:55:07.166254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.811 [2024-10-17 16:55:07.166285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.811 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.166447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.166477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.166566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.166597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.166707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.166741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.166856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.166899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.167879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.167909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.168940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.168967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.169082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.169110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.169208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.169235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.169365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.812 [2024-10-17 16:55:07.169394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.812 qpair failed and we were unable to recover it.
00:26:53.812 [2024-10-17 16:55:07.169546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.169575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.169702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.169732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.169824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.169854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.169959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.169985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.170121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.170148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 
00:26:53.812 [2024-10-17 16:55:07.170244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.170270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.170407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.170437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.170555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.170584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.170736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.170784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.170899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.170925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 
00:26:53.812 [2024-10-17 16:55:07.171042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.171171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.171313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.171438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.171564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 
00:26:53.812 [2024-10-17 16:55:07.171759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.171935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.171962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.172066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.172093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.812 [2024-10-17 16:55:07.172183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.812 [2024-10-17 16:55:07.172209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.812 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.172355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.172381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.172505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.172550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.172684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.172731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.172888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.172916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.173021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.173158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.173282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.173416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.173552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.173682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.173882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.173910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.174013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.174172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.174277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.174386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.174534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.174689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.174820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.174849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.174970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.175120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.175320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.175449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.175602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.175731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.175870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.175896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.176011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.176123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.176292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.176424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.176567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.176707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 
00:26:53.813 [2024-10-17 16:55:07.176831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.176856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.176991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.177038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.813 [2024-10-17 16:55:07.177142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.813 [2024-10-17 16:55:07.177171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.813 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.177295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.177339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.177458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.177487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.177599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.177627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.177767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.177807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.177932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.177958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.178056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.178164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.178297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.178465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.178592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.178766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.178910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.178966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.179099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.179127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.179236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.179263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.179389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.179416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.179597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.179644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.179781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.179829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.179979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.180126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.180260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.180414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.180597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.180806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.180949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.180974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.181110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.181252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.181394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.181535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.181659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.181778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.181903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.181941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.182084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.182113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.814 [2024-10-17 16:55:07.182214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.182253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.182351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.182378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.182497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.182524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.182610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.182635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 00:26:53.814 [2024-10-17 16:55:07.182775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.814 [2024-10-17 16:55:07.182823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.814 qpair failed and we were unable to recover it. 
00:26:53.817 [2024-10-17 16:55:07.199847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.817 [2024-10-17 16:55:07.199873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.817 qpair failed and we were unable to recover it. 00:26:53.817 [2024-10-17 16:55:07.199958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.817 [2024-10-17 16:55:07.199985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.817 qpair failed and we were unable to recover it. 00:26:53.817 [2024-10-17 16:55:07.200108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.817 [2024-10-17 16:55:07.200136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.817 qpair failed and we were unable to recover it. 00:26:53.817 [2024-10-17 16:55:07.200224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.817 [2024-10-17 16:55:07.200269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.817 qpair failed and we were unable to recover it. 00:26:53.817 [2024-10-17 16:55:07.200371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.817 [2024-10-17 16:55:07.200400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.817 qpair failed and we were unable to recover it. 
00:26:53.817 [2024-10-17 16:55:07.200529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.817 [2024-10-17 16:55:07.200557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.817 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.200672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.200701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.200825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.200856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.200978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.201118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.201253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.201412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.201594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.201759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.201945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.201986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.202152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.202190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.202305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.202336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.202525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.202576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.202691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.202740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.202866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.202894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.203024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.203167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.203266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.203429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.203607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.203788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.203938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.203997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.204154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.204182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.204260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.204286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.204419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.204452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.204583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.204630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.204727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.204777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.204921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.204952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.205079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.205105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.205260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.205289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.205388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.205417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.205564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.205608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.205760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.205788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.205937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.205977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.206119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.206159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.206298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.206330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.206448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.206478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.206574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.206605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.206754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.206786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.206921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.206948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 
00:26:53.818 [2024-10-17 16:55:07.207080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.818 [2024-10-17 16:55:07.207110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.818 qpair failed and we were unable to recover it. 00:26:53.818 [2024-10-17 16:55:07.207224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.207251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.207358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.207386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.207482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.207512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.207609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.207640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 
00:26:53.819 [2024-10-17 16:55:07.207753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.207781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.207867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.207895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.207985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.208117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.208259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 
00:26:53.819 [2024-10-17 16:55:07.208373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.208491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.208608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.208728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.208864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.208890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 
00:26:53.819 [2024-10-17 16:55:07.209004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.209124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.209261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.209402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.209555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 
00:26:53.819 [2024-10-17 16:55:07.209714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.209882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.209910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.210009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.210124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.210323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 
00:26:53.819 [2024-10-17 16:55:07.210483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.210604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.210715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.210860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.210887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 00:26:53.819 [2024-10-17 16:55:07.211009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.819 [2024-10-17 16:55:07.211037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.819 qpair failed and we were unable to recover it. 
00:26:53.819 [2024-10-17 16:55:07.211124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.211281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.211446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.211558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.211698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.211805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.211914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.211941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.212027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.212054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.212165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.212192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.212285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.212311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.212450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.212495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.212609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.819 [2024-10-17 16:55:07.212657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.819 qpair failed and we were unable to recover it.
00:26:53.819 [2024-10-17 16:55:07.212770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.212796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.212890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.212916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.212991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.213028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.213137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.213166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.213318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.213348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.213490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.213540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.213688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.213734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.213863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.213892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.213993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.214059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.214206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.214239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.214337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.214367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.214522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.214552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.214704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.214734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.214857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.214888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.214984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.215019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.215122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.215167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.215274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.215305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.215458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.215508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.215652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.215700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.215829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.215858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.215956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.216145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.216261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.216414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.216595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.216726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.216901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.216946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.217109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.217307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.217486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.217668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.217779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.217890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.217972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.218153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.218313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.218513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.218646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.218753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.218892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.218918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.820 [2024-10-17 16:55:07.219071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.820 [2024-10-17 16:55:07.219101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.820 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.219198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.219228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.219355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.219385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.219511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.219539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.219702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.219746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.219864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.219889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.219974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.220122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.220274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.220478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.220649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.220803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.220938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.220980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.221128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.221157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.221253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.221281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.221402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.221430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.221692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.221751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.221892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.221916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.222060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.222216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.222369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.222514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.222671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.222848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.222996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.223051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.223181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.223208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.223346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.223389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.223519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.223591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.223722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.223746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.223859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.223884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.223983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.224156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.224311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.224421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.224573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.224774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.224946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.224973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.225085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.225111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.225198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.225223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.225363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.225388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.225491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.225520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.821 [2024-10-17 16:55:07.225647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.821 [2024-10-17 16:55:07.225675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.821 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.225842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.225884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.226024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.226062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.226229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.226274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.226412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.226456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.226596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.226621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.226726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.226753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.226871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.226897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.227878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.227972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.228021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.228178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.228209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.228312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.228340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.228494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.228546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.228779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.228852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.228974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.229011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.229169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.822 [2024-10-17 16:55:07.229197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:53.822 qpair failed and we were unable to recover it.
00:26:53.822 [2024-10-17 16:55:07.229302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.229327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.229507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.229567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.229713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.229769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.229899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.229926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.230077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.230118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 
00:26:53.822 [2024-10-17 16:55:07.230259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.230304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.230433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.230461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.230650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.230698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.230814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.230841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.230917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.230942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 
00:26:53.822 [2024-10-17 16:55:07.231048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.231088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.231202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.231228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.231365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.231403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.231500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.231526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.822 qpair failed and we were unable to recover it. 00:26:53.822 [2024-10-17 16:55:07.231642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.822 [2024-10-17 16:55:07.231668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.231786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.231812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.231898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.231925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.232040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.232168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.232327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.232473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.232615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.232844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.232967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.232992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.233129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.233157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.233260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.233289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.233397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.233442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.233621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.233650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.233806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.233832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.233969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.233995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.234128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.234170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.234305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.234334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.234458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.234486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.234647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.234676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.234788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.234818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.234972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.235169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.235272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.235446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.235588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.235777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.235885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.235912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.236423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.236973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.236998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.823 [2024-10-17 16:55:07.237150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.237178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.237303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.237331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.237459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.237487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.237610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.237639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 00:26:53.823 [2024-10-17 16:55:07.237777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.823 [2024-10-17 16:55:07.237807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:53.823 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-10-17 16:55:07.237943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.237970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.238080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.238249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.238404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.238533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-10-17 16:55:07.238689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.238819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.238951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.238977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.239117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.239142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.239251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.239294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-10-17 16:55:07.239403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.239430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.239518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.239545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.239661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.239689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.239874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.239901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.240019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.240061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-10-17 16:55:07.240168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.240193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.240331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.240358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.241900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.241941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.242098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.242126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.242245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.242288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-10-17 16:55:07.242403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.242446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.242578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.242607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.242707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.242735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.242860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.824 [2024-10-17 16:55:07.242887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-10-17 16:55:07.242994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.661898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.662153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.662185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.662314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.662340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.662464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.662489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.662602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.662629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.662746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.662771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.662868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.662894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.663009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.663157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.663276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.663477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.663630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.663766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.663899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.663927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.664043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.664165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.664298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.664441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.664623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.664801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.664947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.664972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.665073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.665198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.665381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.665489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.665608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.665772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.665879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.665907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.666057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.666087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.666192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.666218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.666319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.666346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 
00:26:54.094 [2024-10-17 16:55:07.666465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.666493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.094 [2024-10-17 16:55:07.666580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.094 [2024-10-17 16:55:07.666606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.094 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.666703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.666739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.666879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.666909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.667048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.667169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.667304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.667496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.667651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.667830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.667943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.667969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.668101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.668245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.668404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.668525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.668669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.668806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.668926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.668954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.669100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.669216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.669362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.669531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.669667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.669809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.669940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.669970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.670110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.670137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.670250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.670278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.670403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.670433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.670566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.670594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.670709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.670751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.670857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.670886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 
00:26:54.095 [2024-10-17 16:55:07.671521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.095 [2024-10-17 16:55:07.671851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.095 qpair failed and we were unable to recover it. 00:26:54.095 [2024-10-17 16:55:07.671948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.671975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.672075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.672189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.672298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.672426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.672594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.672714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.672817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.672967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.672994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.673083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.673250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.673385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.673491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.673603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.673774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.673884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.673911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.674049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.674202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.674317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.674478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.674651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.674793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.674941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.674971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.675115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.675142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.675227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.675254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.675412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.675441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.675577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.675605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.675689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.675716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.675833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.675863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.676007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.676035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.676129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.676156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 00:26:54.096 [2024-10-17 16:55:07.676270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.096 [2024-10-17 16:55:07.676316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.096 qpair failed and we were unable to recover it. 
00:26:54.096 [2024-10-17 16:55:07.676433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.676460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.096 qpair failed and we were unable to recover it.
00:26:54.096 [2024-10-17 16:55:07.676601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.676628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.096 qpair failed and we were unable to recover it.
00:26:54.096 [2024-10-17 16:55:07.676763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.676793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.096 qpair failed and we were unable to recover it.
00:26:54.096 [2024-10-17 16:55:07.676945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.676972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.096 qpair failed and we were unable to recover it.
00:26:54.096 [2024-10-17 16:55:07.677082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.677110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.096 qpair failed and we were unable to recover it.
00:26:54.096 [2024-10-17 16:55:07.677197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.677223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.096 qpair failed and we were unable to recover it.
00:26:54.096 [2024-10-17 16:55:07.677365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.096 [2024-10-17 16:55:07.677392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.677512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.677539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.677627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.677654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.677744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.677771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.677878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.677905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.678922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.678949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.679898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.679926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.680894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.680923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.681078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.681240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.681404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.681560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.681725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.681894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.681986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.682037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.682152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.682179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.097 [2024-10-17 16:55:07.682312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.097 [2024-10-17 16:55:07.682341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.097 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.682445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.682472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.682585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.682616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.682761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.682788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.682932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.682958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.683947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.683974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.684123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.684151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.684289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.684316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.684437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.684480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.684609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.684638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.684806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.684834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.684974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.685882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.685990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.686043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.686136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.686164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.686302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.686329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.686461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.686490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.686623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.686650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.686729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.098 [2024-10-17 16:55:07.686759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.098 qpair failed and we were unable to recover it.
00:26:54.098 [2024-10-17 16:55:07.686870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.686897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.687957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.687984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.688104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.688131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.688221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.688248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.688392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.688419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.688513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.688555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.688677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.688706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.688831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.688861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.689967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.689994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.690116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.690143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.690280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.099 [2024-10-17 16:55:07.690323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.099 qpair failed and we were unable to recover it.
00:26:54.099 [2024-10-17 16:55:07.690442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.690472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.690606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.690633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.690719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.690753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.690850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.690880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.691026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 
00:26:54.099 [2024-10-17 16:55:07.691139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.691297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.691436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.691602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 00:26:54.099 [2024-10-17 16:55:07.691728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.099 qpair failed and we were unable to recover it. 
00:26:54.099 [2024-10-17 16:55:07.691909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.099 [2024-10-17 16:55:07.691936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.692059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.692245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.692392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.692534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.692673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.692816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.692936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.692962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.693072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.693119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.693242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.693269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.693387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.693413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.693531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.693560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.693691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.693718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.693860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.693886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.694027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.694166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.694321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.694495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.694636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.694772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.694905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.694935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.695077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.695104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.695218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.695246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.695382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.695412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.695572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.695599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.695715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.695742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.695834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.695861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.695973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.696166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.696287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.696413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.696523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.696629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.696799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.696945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.696972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 
00:26:54.100 [2024-10-17 16:55:07.697093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.697120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.100 qpair failed and we were unable to recover it. 00:26:54.100 [2024-10-17 16:55:07.697202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.100 [2024-10-17 16:55:07.697229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.697318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.697345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.697426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.697453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.697559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.697585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.697699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.697726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.697830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.697859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.698032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.698175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.698300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.698485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.698630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.698778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.698921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.698948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.699036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.699146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.699287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.699431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.699599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.699783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.699918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.699945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.700107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.700305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.700424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.700543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.700691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.700800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.700954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.700984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.701133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.701160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.701279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.701306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.701449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.701478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.701589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.701616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.701710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.701736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.701871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.701901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.701989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.702041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.702136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.702163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.702252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.702279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.702394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.702421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.702506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.702534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 00:26:54.101 [2024-10-17 16:55:07.702678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.101 [2024-10-17 16:55:07.702705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.101 qpair failed and we were unable to recover it. 
00:26:54.101 [2024-10-17 16:55:07.702810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.702840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.702937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.702967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.703074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.703101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.703237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.703264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.703346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.703372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.703486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.703512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.703647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.703673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.703787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.703814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.703977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.704015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.704187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.704214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.704332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.704377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.704465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.704495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.704655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.704685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.704806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.704833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.704992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.705131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.705246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.705388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.705527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.705647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.705766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.705932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.705958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.706106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.706133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.706263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.706310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.706450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.706478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.706593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.706620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.706742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.706768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.706892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.706922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.707052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.707198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.707334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.707474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.707613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.707760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.707903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.707929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 
00:26:54.102 [2024-10-17 16:55:07.708045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.708075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.708182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.708209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.102 [2024-10-17 16:55:07.708348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.102 [2024-10-17 16:55:07.708375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.102 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.708489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.708520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.708648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.708674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 
00:26:54.103 [2024-10-17 16:55:07.708785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.708812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.708929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.708956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 
00:26:54.103 [2024-10-17 16:55:07.709434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.709908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.709990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 
00:26:54.103 [2024-10-17 16:55:07.710099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.710217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.710362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.710523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.710723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 
00:26:54.103 [2024-10-17 16:55:07.710845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.710956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.710984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.711091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.711119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.711235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.711263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.711408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.711438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 
00:26:54.103 [2024-10-17 16:55:07.711554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.711592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.711700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.711725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.711841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.711870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.712049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.712076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.712190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.712217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 
00:26:54.103 [2024-10-17 16:55:07.712306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.712333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.712421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.712448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.712590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.103 [2024-10-17 16:55:07.712617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.103 qpair failed and we were unable to recover it. 00:26:54.103 [2024-10-17 16:55:07.712733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.712760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.712845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.712872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.712951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.712978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.713097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.713264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.713408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.713525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.713634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.713775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.713965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.713995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.714136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.714163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.714292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.714336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.714433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.714463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.714620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.714647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.714758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.714785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.714924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.714954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.715095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.715122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.715243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.715270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.715359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.715386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.715499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.715525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.715641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.715667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.715806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.715835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.715976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.716126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.716245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.716355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.716491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.716672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.716852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.716882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.716992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.717116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.717258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.717371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.717525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.717665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.717813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.717840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.104 [2024-10-17 16:55:07.717982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.718036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 
00:26:54.104 [2024-10-17 16:55:07.718173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.104 [2024-10-17 16:55:07.718202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.104 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.718315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.718343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.718441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.718467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.718543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.718569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.718659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.718684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.718764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.718789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.718929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.718955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.719042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.719151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.719290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.719402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.719571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.719705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.719851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.719879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.720059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.720088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.720231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.720259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.720395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.720425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.720544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.720574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.720736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.720763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.720885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.720913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.721028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.721193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.721304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.721465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.721636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.721783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.721911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.721940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.722422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.722899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 00:26:54.105 [2024-10-17 16:55:07.722988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.723020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.105 qpair failed and we were unable to recover it. 
00:26:54.105 [2024-10-17 16:55:07.723134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.105 [2024-10-17 16:55:07.723162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.723254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.723281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.723373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.723399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.723535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.723565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.723709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.723735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.723846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.723872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.724041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.724183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.724326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.724463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.724629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.724779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.724914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.724943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.725066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.725182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.725303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.725476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.725589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.725704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.725842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.725870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.725987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.726163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.726280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.726400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.726542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.726683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.726848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.726910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.727426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.727942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.727968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 
00:26:54.106 [2024-10-17 16:55:07.728089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.728116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.106 [2024-10-17 16:55:07.728223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.106 [2024-10-17 16:55:07.728250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.106 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.728356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.728383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.728524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.728551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.728658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.728684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 
00:26:54.107 [2024-10-17 16:55:07.728771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.728798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.728906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.728933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.729079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.729200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.729320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 
00:26:54.107 [2024-10-17 16:55:07.729447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.729616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.729780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.729940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.729970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.730084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 
00:26:54.107 [2024-10-17 16:55:07.730202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.730382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.730533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.730648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.730771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 
00:26:54.107 [2024-10-17 16:55:07.730906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.730933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.731028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.731058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.731147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.731174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.731293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.731321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 00:26:54.107 [2024-10-17 16:55:07.731462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.107 [2024-10-17 16:55:07.731505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.107 qpair failed and we were unable to recover it. 
00:26:54.109 [2024-10-17 16:55:07.742402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-10-17 16:55:07.742430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.109 qpair failed and we were unable to recover it.
00:26:54.109 [2024-10-17 16:55:07.742516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-10-17 16:55:07.742542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.109 qpair failed and we were unable to recover it.
00:26:54.109 [2024-10-17 16:55:07.742642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-10-17 16:55:07.742683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.109 qpair failed and we were unable to recover it.
00:26:54.109 [2024-10-17 16:55:07.742774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-10-17 16:55:07.742803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.109 qpair failed and we were unable to recover it.
00:26:54.109 [2024-10-17 16:55:07.742921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.109 [2024-10-17 16:55:07.742962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.109 qpair failed and we were unable to recover it.
00:26:54.110 [2024-10-17 16:55:07.746797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.746828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.746946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.746976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.747121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.747149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.747268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.747312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.747427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.747462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 
00:26:54.110 [2024-10-17 16:55:07.747580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.747624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.747748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.747780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.747916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.747947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.748081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.748220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 
00:26:54.110 [2024-10-17 16:55:07.748365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.748478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.748600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.748737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.748849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.748878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 
00:26:54.110 [2024-10-17 16:55:07.749011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.749040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.749129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.749156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.749278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.749305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.749452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.110 [2024-10-17 16:55:07.749478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.110 qpair failed and we were unable to recover it. 00:26:54.110 [2024-10-17 16:55:07.749623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.749650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.749770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.749800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.749904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.749934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.750049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.750106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.750206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.750234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.750384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.750413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.750560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.750603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.750754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.750783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.750907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.750937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.751082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.751225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.751367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.751533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.751688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.751817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.751944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.751971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.752074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.752211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.752387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.752513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.752639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.752800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.752948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.752978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.753119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.753160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.753314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.753376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.753524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.753573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.753717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.753745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.753863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.753892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.754016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.754157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.754376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.754563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.754698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.754845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.754963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.754991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 00:26:54.111 [2024-10-17 16:55:07.755130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.111 [2024-10-17 16:55:07.755159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.111 qpair failed and we were unable to recover it. 
00:26:54.111 [2024-10-17 16:55:07.755260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.755289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.755441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.755470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.755590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.755619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.755719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.755763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.755853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.755880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 
00:26:54.112 [2024-10-17 16:55:07.755963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.755996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.756125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.756151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.756275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.756304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.756427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.756484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.756642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.756671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 
00:26:54.112 [2024-10-17 16:55:07.756802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.756832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.756947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.756973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.757106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.757133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.757247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.757277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.757413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.757443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 
00:26:54.112 [2024-10-17 16:55:07.757576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.757605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.757695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.757724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.757861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.757888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.757977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.758013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 00:26:54.112 [2024-10-17 16:55:07.758102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.112 [2024-10-17 16:55:07.758127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.112 qpair failed and we were unable to recover it. 
00:26:54.112 [2024-10-17 16:55:07.758222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.758251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.758437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.758463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.758561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.758588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.758714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.758741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.758827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.758853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.758938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.758965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.759849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.759990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.760060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.112 [2024-10-17 16:55:07.760174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.112 [2024-10-17 16:55:07.760202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.112 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.760341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.760371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.760460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.760497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.760608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.760639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.760810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.760866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.761015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.761069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.761188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.761215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.761320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.761349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.761515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.761590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.761703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.761743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.761847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.761878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.762907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.762995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.763953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.763980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.764088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.764116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.764241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.764267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.764403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.764446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.113 qpair failed and we were unable to recover it.
00:26:54.113 [2024-10-17 16:55:07.764575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.113 [2024-10-17 16:55:07.764623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.764744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.764770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.764915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.764942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.765942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.765971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.766153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.766265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.766452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.766584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.766735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.766862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.766978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.767133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.767243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.767386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.767523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.767736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.767915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.767942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.768073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.768196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.768336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.768517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.768733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.768857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.768989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.769174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.769309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.769461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.769599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.769777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.769906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.769950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.114 qpair failed and we were unable to recover it.
00:26:54.114 [2024-10-17 16:55:07.770049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.114 [2024-10-17 16:55:07.770076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.770160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.770186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.770322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.770351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.770478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.770521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.770638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.770667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.770760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.770789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.770891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.770916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.771933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.771962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.772154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.772262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.772405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.772561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.772714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.772867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.772995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.773053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.773142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.773169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.773259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.773286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.773456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.773485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.115 [2024-10-17 16:55:07.773595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.115 [2024-10-17 16:55:07.773639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.115 qpair failed and we were unable to recover it.
00:26:54.406 [2024-10-17 16:55:07.773742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.406 [2024-10-17 16:55:07.773773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.406 qpair failed and we were unable to recover it.
00:26:54.406 [2024-10-17 16:55:07.773900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.406 [2024-10-17 16:55:07.773957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.406 qpair failed and we were unable to recover it.
00:26:54.406 [2024-10-17 16:55:07.774118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.406 [2024-10-17 16:55:07.774146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.406 qpair failed and we were unable to recover it.
00:26:54.406 [2024-10-17 16:55:07.774262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.774316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.774485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.774512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.774663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.774693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.774821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.774851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.774942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.774988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.775097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.775124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.775240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.775266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.775418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.407 [2024-10-17 16:55:07.775447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.407 qpair failed and we were unable to recover it.
00:26:54.407 [2024-10-17 16:55:07.775547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.775573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.775707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.775736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.775864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.775894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.776022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.776152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 
00:26:54.407 [2024-10-17 16:55:07.776259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.776396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.776540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.776720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.776875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.776903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 
00:26:54.407 [2024-10-17 16:55:07.777020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.777057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.777148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.777174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.777295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.777355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.777494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.777526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.777700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.777732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 
00:26:54.407 [2024-10-17 16:55:07.777835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.777864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.777971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.778013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.778153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.778180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.778294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.778321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.778508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.778534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 
00:26:54.407 [2024-10-17 16:55:07.778648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.407 [2024-10-17 16:55:07.778677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.407 qpair failed and we were unable to recover it. 00:26:54.407 [2024-10-17 16:55:07.778802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.778831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.778929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.778972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.779078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.779196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 
00:26:54.408 [2024-10-17 16:55:07.779324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.779452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.779582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.779710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.779838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 
00:26:54.408 [2024-10-17 16:55:07.779957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.779985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.780150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.780176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.780258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.780286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.780411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.780438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.780575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.780604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 
00:26:54.408 [2024-10-17 16:55:07.780719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.780763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.780890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.780920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.781088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.781115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.781230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.781257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.781388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.781431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 
00:26:54.408 [2024-10-17 16:55:07.781527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.781558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.781746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.781776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.781905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.781935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.782080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.782111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.782203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.782231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 
00:26:54.408 [2024-10-17 16:55:07.782320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.782347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.782511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.782541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.782663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.782692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.782807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.782837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.782971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.783025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 
00:26:54.408 [2024-10-17 16:55:07.783164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.783193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.783288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.783319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.783436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.408 [2024-10-17 16:55:07.783463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.408 qpair failed and we were unable to recover it. 00:26:54.408 [2024-10-17 16:55:07.783619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.783649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.783798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.783842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 
00:26:54.409 [2024-10-17 16:55:07.783945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.783976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.784167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.784208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.784405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.784465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.784552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.784580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.784737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.784788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 
00:26:54.409 [2024-10-17 16:55:07.784904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.784932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.785018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.785056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.785189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.785234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.785423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.785483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.785647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.785706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 
00:26:54.409 [2024-10-17 16:55:07.785826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.785853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.785945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.785973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.786141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.786186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.786329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.786393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.786612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.786666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 
00:26:54.409 [2024-10-17 16:55:07.786810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.786837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.786952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.786979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.787140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.787185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.787345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.787377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.787531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.787561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 
00:26:54.409 [2024-10-17 16:55:07.787652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.787682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.787800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.787845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.788012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.788070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.788186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.788213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.788345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.788374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 
00:26:54.409 [2024-10-17 16:55:07.788545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.788596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.788729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.788782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.788904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.409 [2024-10-17 16:55:07.788933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.409 qpair failed and we were unable to recover it. 00:26:54.409 [2024-10-17 16:55:07.789071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.410 [2024-10-17 16:55:07.789103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.410 qpair failed and we were unable to recover it. 00:26:54.410 [2024-10-17 16:55:07.789200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.410 [2024-10-17 16:55:07.789229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.410 qpair failed and we were unable to recover it. 
00:26:54.410 [2024-10-17 16:55:07.789363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.789409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.789544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.789589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.789697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.789728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.789885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.789912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.790960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.790986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.791933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.791960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.792967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.792993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.793103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.793134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.793234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.793277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.793445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.793474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.793580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.793606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.410 [2024-10-17 16:55:07.793746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.410 [2024-10-17 16:55:07.793775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.410 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.793934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.793963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.794151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.794181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.794274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.794313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.794463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.794492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.794591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.794621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.794793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.794852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.794951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.794977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.795088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.795127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.795250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.795295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.795461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.795524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.795754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.795785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.795943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.795973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.796130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.796170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.796344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.796376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.796475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.796507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.796657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.796748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.796953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.796983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.797169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.797196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.797278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.797333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.797442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.797485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.797590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.797621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.797741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.797770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.797875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.797913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.798961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.798993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.411 [2024-10-17 16:55:07.799134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.411 [2024-10-17 16:55:07.799160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.411 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.799243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.799271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.799367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.799394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.799543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.799588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.799702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.799747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.799873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.799903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.800052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.800080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.800197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.800224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.800409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.800477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.800665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.800723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.800870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.800897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.801923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.801952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.802095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.802122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.802232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.802262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.802358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.802385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.802553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.802582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.802709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.802752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.802883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.802924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.803014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.803057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.803172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.412 [2024-10-17 16:55:07.803198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.412 qpair failed and we were unable to recover it.
00:26:54.412 [2024-10-17 16:55:07.803318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.803345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.803478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.803508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.803693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.803723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.803891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.803932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.804074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.804102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.804234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.804264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.804375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.804405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.804588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.804644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.804871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.804926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.805075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.805103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.805245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.805288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.805476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.805504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.805673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.805723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.805814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.805841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.805995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.806055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.806181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.806208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.806377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.806407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.806520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.806586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.806746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.806784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.806945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.806975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.807134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.807175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.807343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.807401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.807508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.807553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.807726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.807784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.807899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.807926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.808949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.808976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.413 [2024-10-17 16:55:07.809083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.413 [2024-10-17 16:55:07.809113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.413 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.809260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.809316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.809484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.809516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.809609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.809637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.809758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.809787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.809925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.809970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.810123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.810152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.810308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.810342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.810478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.810507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.810597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.810628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.810748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.810778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.810893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.810937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.811058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.811090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.811232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.811260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.811390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.811420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.811618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.811670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.811851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.811906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.811998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.812049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.812163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.812191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.812319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.812351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.812477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.812507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.812659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.812689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.812848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.812881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.812985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.813150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.813312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.813503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.813629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.813776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.813930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.813960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.814098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.814126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.814244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.414 [2024-10-17 16:55:07.814270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.414 qpair failed and we were unable to recover it.
00:26:54.414 [2024-10-17 16:55:07.814402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.814432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.814543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.814586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.814683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.814712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.814803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.814833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.814967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.814993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.815112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.815139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.815230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.815257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.815390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.815420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.815573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.815603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.815743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.815787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.815964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.815994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.816111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.816148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.816247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.816273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.816386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.816417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.816597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.816663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.816762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.816793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.816930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.816957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.817068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.817115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.817236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.817265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.817381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.817472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.817655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.817703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.817828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.817859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.818008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.818050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.818165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.818197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.818318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.818345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.818510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.818553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.818681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.818752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.818873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.818904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.819008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.819040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.415 [2024-10-17 16:55:07.819157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.415 [2024-10-17 16:55:07.819184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.415 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.819324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.819368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.819571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.819601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.819724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.819754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.819860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.819891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.820063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.820176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.820298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.820421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.820579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.416 [2024-10-17 16:55:07.820738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.416 qpair failed and we were unable to recover it.
00:26:54.416 [2024-10-17 16:55:07.820839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.820884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 
00:26:54.416 [2024-10-17 16:55:07.821552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.821931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.821958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.822067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.822095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 
00:26:54.416 [2024-10-17 16:55:07.822197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.822237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.822466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.822521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.822722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.822780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.822979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.823144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 
00:26:54.416 [2024-10-17 16:55:07.823262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.823470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.823637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.823815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.823945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.823987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 
00:26:54.416 [2024-10-17 16:55:07.824087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.824115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.824198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.824224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.824376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.824429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.824540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.824585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.824826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.824883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 
00:26:54.416 [2024-10-17 16:55:07.825047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.416 [2024-10-17 16:55:07.825087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.416 qpair failed and we were unable to recover it. 00:26:54.416 [2024-10-17 16:55:07.825208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.825237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.825382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.825427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.825578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.825608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.825720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.825763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 
00:26:54.417 [2024-10-17 16:55:07.825891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.825924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.826047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.826193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.826306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.826437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 
00:26:54.417 [2024-10-17 16:55:07.826579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.826705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.826869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.826901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.827040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.827068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.827147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.827172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 
00:26:54.417 [2024-10-17 16:55:07.827305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.827336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.827525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.827554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.827651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.827682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.827812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.827843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.827977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.828011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 
00:26:54.417 [2024-10-17 16:55:07.828157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.828184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.828272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.828316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.828453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.828496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.828673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.828722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.828872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.828902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 
00:26:54.417 [2024-10-17 16:55:07.829033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.829066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.829182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.829211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.829320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.829350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.417 qpair failed and we were unable to recover it. 00:26:54.417 [2024-10-17 16:55:07.829538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.417 [2024-10-17 16:55:07.829568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.829748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.829803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.829914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.829971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.830096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.830123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.830241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.830287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.830450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.830477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.830640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.830710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.830834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.830873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.831011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.831057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.831180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.831208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.831367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.831397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.831532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.831560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.831724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.831767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.831881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.831909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.831990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.832024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.832135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.832172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.832293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.832319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.832475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.832504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.832672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.832727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.832826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.832855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.833025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.833071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.833183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.833210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.833340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.833369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.833550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.833608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.833779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.833842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.833940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.833970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.834073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.834196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.834360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.834508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.834658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.834815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 
00:26:54.418 [2024-10-17 16:55:07.834934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.834963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.835128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.418 [2024-10-17 16:55:07.835155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.418 qpair failed and we were unable to recover it. 00:26:54.418 [2024-10-17 16:55:07.835264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-10-17 16:55:07.835290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-10-17 16:55:07.835405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-10-17 16:55:07.835432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-10-17 16:55:07.835554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-10-17 16:55:07.835584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 
00:26:54.419 - 00:26:54.422 [2024-10-17 16:55:07.835700 - 16:55:07.853144] (same posix_sock_create / nvme_tcp_qpair_connect_sock failure pair, errno = 111, addr=10.0.0.2, port=4420, repeated for tqpairs 0x1b24060, 0x7f01f4000b90, 0x7f01f8000b90, 0x7f0200000b90; duplicate entries omitted)
00:26:54.422 [2024-10-17 16:55:07.853260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.853289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.853405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.853432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.853536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.853603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.853852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.853905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.854027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 
00:26:54.422 [2024-10-17 16:55:07.854160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.854298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.854411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.854545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.854672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 
00:26:54.422 [2024-10-17 16:55:07.854892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.854934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.855063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.855104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.855232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-10-17 16:55:07.855259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-10-17 16:55:07.855377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.855404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.855512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.855539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-10-17 16:55:07.855658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.855688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.855834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.855880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.856019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.856067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.856158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.856188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.856328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.856374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-10-17 16:55:07.856508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.856554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.856695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.856748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.856865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.856893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.857011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.857133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-10-17 16:55:07.857281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.857449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.857587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.857724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.857849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.857890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-10-17 16:55:07.857987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.858040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.858167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.858198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.858324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.858354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.858584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.858637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.858819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.858869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-10-17 16:55:07.858993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.859025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.859145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.859172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.859302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.859332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.859541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.859575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-10-17 16:55:07.859672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-10-17 16:55:07.859702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-10-17 16:55:07.859851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.859882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.860021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.860048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.860163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.860190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.860320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.860350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.860470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.860514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 
00:26:54.424 [2024-10-17 16:55:07.860675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.860705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.860851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.860883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.861017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.861136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.861297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 
00:26:54.424 [2024-10-17 16:55:07.861451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.861577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.861760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.861964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.861991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.862127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.862154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 
00:26:54.424 [2024-10-17 16:55:07.862294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.862339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.862486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.862512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.862663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.862692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.862816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.862846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.862982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 
00:26:54.424 [2024-10-17 16:55:07.863136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.863255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.863424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.863584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.863741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 
00:26:54.424 [2024-10-17 16:55:07.863934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.863975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.864088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.864117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.864244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.864271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.864387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.864430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 00:26:54.424 [2024-10-17 16:55:07.864613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.864666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.424 qpair failed and we were unable to recover it. 
00:26:54.424 [2024-10-17 16:55:07.864799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.424 [2024-10-17 16:55:07.864829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.864922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.864953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.865089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.865130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.865215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.865243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.865329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.865372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 
00:26:54.425 [2024-10-17 16:55:07.865481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.865546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.865722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.865771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.865860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.865889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.866053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.866177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 
00:26:54.425 [2024-10-17 16:55:07.866353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.866512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.866638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.866792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.866951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.866977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 
00:26:54.425 [2024-10-17 16:55:07.867097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.867240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.867363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.867506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.867639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 
00:26:54.425 [2024-10-17 16:55:07.867805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.867924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.867954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.868093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.868134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.868284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.868314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.868447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.868492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 
00:26:54.425 [2024-10-17 16:55:07.868688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.868718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.868828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.868855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.869016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.869061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.869163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.869194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.869439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.869470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 
00:26:54.425 [2024-10-17 16:55:07.869704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.869760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.869854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.869882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.869999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.425 [2024-10-17 16:55:07.870032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.425 qpair failed and we were unable to recover it. 00:26:54.425 [2024-10-17 16:55:07.870150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.870177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.870343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.870394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 
00:26:54.426 [2024-10-17 16:55:07.870593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.870651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.870818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.870864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.870975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.871121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.871264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 
00:26:54.426 [2024-10-17 16:55:07.871451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.871599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.871736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.871851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.871880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.872013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 
00:26:54.426 [2024-10-17 16:55:07.872134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.872279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.872397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.872618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.872744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 
00:26:54.426 [2024-10-17 16:55:07.872889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.872916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.873039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.873086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.873203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.873232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.873411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.873438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.873620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.873676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 
00:26:54.426 [2024-10-17 16:55:07.873806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.873833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.873943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.873969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.874090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.874117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.874256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.874286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.874414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.874443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 
00:26:54.426 [2024-10-17 16:55:07.874539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.874568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.874683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.874713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.874843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.426 [2024-10-17 16:55:07.874888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.426 qpair failed and we were unable to recover it. 00:26:54.426 [2024-10-17 16:55:07.874987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.875141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 
00:26:54.427 [2024-10-17 16:55:07.875277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.875432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.875548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.875701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.875867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.875912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 
00:26:54.427 [2024-10-17 16:55:07.876064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.876209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.876362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.876513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.876698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 
00:26:54.427 [2024-10-17 16:55:07.876832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.876951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.876979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.877073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.877100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.877242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.877271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.877364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.877394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 
00:26:54.427 [2024-10-17 16:55:07.877547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.877577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.877771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.877818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.877916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.877947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.878038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.878066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.878181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.878208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 
00:26:54.427 [2024-10-17 16:55:07.878372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.878402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.878526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.878555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.878639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.878671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.878851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.878900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.879016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.879068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 
00:26:54.427 [2024-10-17 16:55:07.879158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.879203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.879303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.879334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.879457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.879486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.879613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.427 [2024-10-17 16:55:07.879643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.427 qpair failed and we were unable to recover it. 00:26:54.427 [2024-10-17 16:55:07.879768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.879797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 
00:26:54.428 [2024-10-17 16:55:07.879905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.879935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.880063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.880111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.880247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.880275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.880432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.880476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.880563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.880592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 
00:26:54.428 [2024-10-17 16:55:07.880740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.880768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.880871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.880911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.881009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.881050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.881210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.881241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.881366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.881397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 
00:26:54.428 [2024-10-17 16:55:07.881564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.881611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.881807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.881835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.881949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.881975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.882080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.882111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-10-17 16:55:07.882203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-10-17 16:55:07.882232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 
00:26:54.428 [2024-10-17 16:55:07.882349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.882394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.882532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.882578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.882802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.882856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.882958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.883958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.883984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.884124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.884152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.428 qpair failed and we were unable to recover it.
00:26:54.428 [2024-10-17 16:55:07.884237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.428 [2024-10-17 16:55:07.884265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.884406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.884432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.884542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.884569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.884687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.884714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.884799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.884826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.884919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.884946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.885107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.885155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.885297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.885343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.885490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.885534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.885669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.885700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.885826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.885856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.885977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.886116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.886251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.886402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.886524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.886671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.886864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.886895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.887035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.887063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.887176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.887203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.887338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.887370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.887561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.887622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.887749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.887779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.887880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.887906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.888956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.888988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.889137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.889166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.889306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.889354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.889525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.889588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.889782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.889834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.889950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.889978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.890124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.890152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.429 qpair failed and we were unable to recover it.
00:26:54.429 [2024-10-17 16:55:07.890292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.429 [2024-10-17 16:55:07.890322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.890413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.890443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.890538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.890569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.890678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.890707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.890874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.890915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.891041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.891070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.891181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.891208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.891334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.891364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.891538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.891591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.891828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.891881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.892952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.892981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.893946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.893973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.894961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.894990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.895097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.895126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.895231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.895262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.895438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.895483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.895574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.895602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.895717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.895757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.895869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.895899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.896035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.896098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.896290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.896362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.896563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.896621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.896842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.430 [2024-10-17 16:55:07.896896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.430 qpair failed and we were unable to recover it.
00:26:54.430 [2024-10-17 16:55:07.897036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.431 [2024-10-17 16:55:07.897065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.431 qpair failed and we were unable to recover it.
00:26:54.431 [2024-10-17 16:55:07.897210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.431 [2024-10-17 16:55:07.897238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.431 qpair failed and we were unable to recover it.
00:26:54.431 [2024-10-17 16:55:07.897351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.431 [2024-10-17 16:55:07.897397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.431 qpair failed and we were unable to recover it.
00:26:54.431 [2024-10-17 16:55:07.897595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.431 [2024-10-17 16:55:07.897625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.431 qpair failed and we were unable to recover it.
00:26:54.431 [2024-10-17 16:55:07.897809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.431 [2024-10-17 16:55:07.897863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.431 qpair failed and we were unable to recover it.
00:26:54.431 [2024-10-17 16:55:07.897994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.898043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.898136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.898164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.898269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.898299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.898461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.898492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.898676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.898729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.898868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.898905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.899067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.899103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.899190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.899217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.899352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.899382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.899544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.899573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.899696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.899725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.899883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.899912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.900033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.900172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.900291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.900479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.900615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.900750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.900937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.900969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.901085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.901198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.901349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.901474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.901629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.901746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.901926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.901955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.902067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.902169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.902348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.902499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.902652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.902776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.902939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.902969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.903091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.903132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.903285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.903331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-10-17 16:55:07.903447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-10-17 16:55:07.903479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-10-17 16:55:07.903606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.903635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.903725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.903755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.903903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.903933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.904103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.904131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.904248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.904278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.904389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.904420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.904516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.904544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.904706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.904752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.904895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.904935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.905060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.905095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.905207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.905235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.905360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.905392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.905613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.905670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.905803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.905833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.905991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.906178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.906323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.906462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.906599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.906750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.906907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.906941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.907057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.907085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.907227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.907255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.907402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.907432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.907617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.907648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.907773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.907803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.907928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.907973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.908116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.908156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.908275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.908302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.908404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.908434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.908555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.908623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.908853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.908910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.909047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.909187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.909311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.909418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.909640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.909799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.909961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.909992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-10-17 16:55:07.910144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.910172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.910288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-10-17 16:55:07.910316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-10-17 16:55:07.910456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.910482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 00:26:54.433 [2024-10-17 16:55:07.910644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.910674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 00:26:54.433 [2024-10-17 16:55:07.910795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.910825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 
00:26:54.433 [2024-10-17 16:55:07.910974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.911013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 00:26:54.433 [2024-10-17 16:55:07.911142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.911169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 00:26:54.433 [2024-10-17 16:55:07.911264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.911291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 00:26:54.433 [2024-10-17 16:55:07.911453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.911483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 00:26:54.433 [2024-10-17 16:55:07.911674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.433 [2024-10-17 16:55:07.911704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.433 qpair failed and we were unable to recover it. 
00:26:54.433 [2024-10-17 16:55:07.911860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.911888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.912025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.912072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.912181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.912208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.912309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.912351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.912564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.912618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.912849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.912907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.913071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.913098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.913244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.913271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.913433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.913463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.913596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.913626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.913749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.913778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.913931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.913973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.914114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.914155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.914304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.914349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.914467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.914504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.914652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.914683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.914799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.433 [2024-10-17 16:55:07.914829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.433 qpair failed and we were unable to recover it.
00:26:54.433 [2024-10-17 16:55:07.914984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.915023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.915179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.915221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.915411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.915477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.915570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.915600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.915768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.915822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.915957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.915997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.916139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.916298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.916415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.916538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.916709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.916866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.916994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.917041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.917159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.917188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.917393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.917423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.917554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.917584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.917752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.917816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.917917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.917945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.918037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.918063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.918155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.918184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.918379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.918410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.918538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.918579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.918778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.918809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.918972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.919040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.919174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.919202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.919332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.919372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.919496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.919525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.919688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.919747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.919857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.919885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.920036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.920066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.920184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.920212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.920410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.920452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.920690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.920745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.434 [2024-10-17 16:55:07.920894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.434 [2024-10-17 16:55:07.920921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.434 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.921929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.921959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.922953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.922980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.923067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.923092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.923191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.923232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.923372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.923403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.923558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.923586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.923726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.923790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.923971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.924020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.924169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.924201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.924375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.924431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.924589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.924654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.924876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.924930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.925038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.925085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.925197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.925225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.925368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.925397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.925514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.925543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.925723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.925778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.925910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.925937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.926043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.926084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.926203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.926232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.926404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.926448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.926671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.926728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.926826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.435 [2024-10-17 16:55:07.926871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.435 qpair failed and we were unable to recover it.
00:26:54.435 [2024-10-17 16:55:07.927065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.927177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.927346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.927501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.927657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.927775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.927909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.927936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.928057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.928085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.928179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.928206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.928316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.928348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.928499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.928529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.928653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.928682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.928872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.928931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.929057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.929086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.929208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.929235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.929345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.929372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.929516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.929584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.929705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.929735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.929833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.929864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.930009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.930040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.930156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.930185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.930319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.930349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.930510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.930555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.930683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.930713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.930822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.930852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.931048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.931076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.931195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.931222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.931368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.931396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.931510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.436 [2024-10-17 16:55:07.931539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.436 qpair failed and we were unable to recover it.
00:26:54.436 [2024-10-17 16:55:07.931620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.436 [2024-10-17 16:55:07.931646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.436 qpair failed and we were unable to recover it. 00:26:54.436 [2024-10-17 16:55:07.931752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.436 [2024-10-17 16:55:07.931782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.436 qpair failed and we were unable to recover it. 00:26:54.436 [2024-10-17 16:55:07.931899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.436 [2024-10-17 16:55:07.931944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.436 qpair failed and we were unable to recover it. 00:26:54.436 [2024-10-17 16:55:07.932115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.436 [2024-10-17 16:55:07.932156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.436 qpair failed and we were unable to recover it. 00:26:54.436 [2024-10-17 16:55:07.932278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.436 [2024-10-17 16:55:07.932307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.436 qpair failed and we were unable to recover it. 
00:26:54.436 [2024-10-17 16:55:07.932426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.436 [2024-10-17 16:55:07.932493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.436 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.932665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.932736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.932880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.932908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.933038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.933066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.933196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.933247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.933404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.933449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.933626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.933682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.933807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.933835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.933975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.934013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.934119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.934149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.934297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.934343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.934473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.934518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.934646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.934674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.934813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.934840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.934971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.935123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.935275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.935386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.935500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.935639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.935771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.935935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.935964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.936085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.936131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.936264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.936296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.936422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.936452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.936620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.936675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.936772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.936803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.936897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.936927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.937070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.937100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.937206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.937236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.937382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.937428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.937587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.937632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.937776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.937803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.937922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.937950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.938061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.938090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-10-17 16:55:07.938210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.938239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.938323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.938350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-10-17 16:55:07.938440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-10-17 16:55:07.938467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.938558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.938586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.938701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.938730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-10-17 16:55:07.938815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.938840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.938930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.938964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.939065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.939206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.939337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-10-17 16:55:07.939484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.939671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.939795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.939949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.939980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.940130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.940157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-10-17 16:55:07.940263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.940293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.940387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.940417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.940550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.940579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.940709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.940742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.940864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.940906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-10-17 16:55:07.941040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.941184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.941343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.941475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.941594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-10-17 16:55:07.941753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.941886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.941931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.942034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.942081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.942223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.942251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.942376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.942406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-10-17 16:55:07.942560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.942590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-10-17 16:55:07.942680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-10-17 16:55:07.942709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.942839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.942869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.943008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.943143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-10-17 16:55:07.943321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.943450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.943606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.943746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.943950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.943992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-10-17 16:55:07.944106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.944136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.944234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.944262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.944479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.944537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.944769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.944824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.944969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.945020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-10-17 16:55:07.945157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.945185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.945273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.945298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.945393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.945421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.945654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.945720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.945829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.945886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-10-17 16:55:07.946081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.946122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.946245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.946290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.946460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.946516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.946652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.946718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.946868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.946898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-10-17 16:55:07.947042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.947069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.947191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.947228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.947411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.947457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.947661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.947694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.947812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.947843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-10-17 16:55:07.947949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.947980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.948104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.948132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-10-17 16:55:07.948281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-10-17 16:55:07.948309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.948417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.948447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.948637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.948667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-10-17 16:55:07.948781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.948823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.948963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.948993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.949111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.949139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.949255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.949283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.949364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.949409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-10-17 16:55:07.949528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.949558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.949655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.949685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.949808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.949838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.949998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.950132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-10-17 16:55:07.950292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.950420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.950603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.950735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.950868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.950898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-10-17 16:55:07.950994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.951047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.951185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.951212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.951326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.951353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.951491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.951534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.951673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.951700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-10-17 16:55:07.951887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.951917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.952057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.952084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.952213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.952254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.952449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.952481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.952635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.952666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-10-17 16:55:07.952767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.952800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.952906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.952933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.953056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-10-17 16:55:07.953085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-10-17 16:55:07.953195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.953223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.953311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.953339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.953428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.953471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.953559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.953590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.953751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.953782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.953908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.953938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.954043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.954212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.954331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.954470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.954590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.954708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.954855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.954883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.955013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.955178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.955315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.955469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.955652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.955781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.955914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.955956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.956166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.956197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.956320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.956349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.956466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.956494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.956607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.956634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.956737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.956782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.956875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.956902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.957013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.957171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.957324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.957444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.957622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.957748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-10-17 16:55:07.957891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.957932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.958056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.958105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.958243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.958295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.958424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-10-17 16:55:07.958498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-10-17 16:55:07.958598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.958626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-10-17 16:55:07.958742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.958769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.958906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.958947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.959096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.959129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.959246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.959276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.959415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.959469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-10-17 16:55:07.959630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.959679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.959863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.959894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.960043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.960074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.960226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.960261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-10-17 16:55:07.960416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-10-17 16:55:07.960448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-10-17 16:55:07.960600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.960631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.960735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.960767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.960866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.960898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.961959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.961984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.962960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.962990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.963113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.963141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.963250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.963279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.963432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.963462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.963592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.963622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.963790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.963850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.963946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.963977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.964104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.964133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.964224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.964271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.964365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.964395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.964493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.964523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.964731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.442 [2024-10-17 16:55:07.964793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.442 qpair failed and we were unable to recover it.
00:26:54.442 [2024-10-17 16:55:07.964917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.964947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.965079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.965220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.965357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.965496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.965681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.965866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.965991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.966163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.966296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.966441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.966610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.966727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.966881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.966911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.967894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.967923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.968909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.968939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.969052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.969079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.969166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.969192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.969280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.443 [2024-10-17 16:55:07.969307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.443 qpair failed and we were unable to recover it.
00:26:54.443 [2024-10-17 16:55:07.969403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.969432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.969524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.969555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.969654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.969699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.969872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.969902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.969992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.970174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.970318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.970475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.970648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.970783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.970932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.970975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.971122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.971242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.971378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.971510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.971706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.971887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.971985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.972122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.972245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.972399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.972547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.972751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.972948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.972991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.973940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.444 [2024-10-17 16:55:07.973971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.444 qpair failed and we were unable to recover it.
00:26:54.444 [2024-10-17 16:55:07.974111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.445 [2024-10-17 16:55:07.974139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.445 qpair failed and we were unable to recover it.
00:26:54.445 [2024-10-17 16:55:07.974254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.445 [2024-10-17 16:55:07.974298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.445 qpair failed and we were unable to recover it.
00:26:54.445 [2024-10-17 16:55:07.974401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.445 [2024-10-17 16:55:07.974446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.445 qpair failed and we were unable to recover it.
00:26:54.445 [2024-10-17 16:55:07.974611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.445 [2024-10-17 16:55:07.974642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.445 qpair failed and we were unable to recover it.
00:26:54.445 [2024-10-17 16:55:07.974788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.974834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.974995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.975034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.975168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.975195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.975282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.975327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.975512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.975542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-10-17 16:55:07.975733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.975789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.975913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.975942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.976043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.976191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.976313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-10-17 16:55:07.976431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.976566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.976714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.976835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.976863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.977020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-10-17 16:55:07.977189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.977345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.977531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.977651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.977805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-10-17 16:55:07.977965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.977996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.978125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.978156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.978294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.978338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.978458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.978529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.978732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.978783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-10-17 16:55:07.978902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.978942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.979053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.979082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.979208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.979235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.979350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.979380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-10-17 16:55:07.979562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-10-17 16:55:07.979625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-10-17 16:55:07.979714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.979743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.979839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.979870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.979968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.979999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.980117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.980145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.980243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.980285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-10-17 16:55:07.980426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.980473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.980614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.980660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.980771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.980817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.980909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.980937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.981032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-10-17 16:55:07.981152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.981284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.981428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.981575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.981724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-10-17 16:55:07.981861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.981902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.982073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.982106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.982230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.982261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.982424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.982482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.982633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.982681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-10-17 16:55:07.982809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.982836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.982962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.982989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.983146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.983176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.983328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.983357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.983487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.983517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-10-17 16:55:07.983753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.983802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.983890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.983934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.984047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.984075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.984166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.984194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.984324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.984353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-10-17 16:55:07.984447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.984478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.984562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-10-17 16:55:07.984589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-10-17 16:55:07.984726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.984756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.984864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.984892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.984984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-10-17 16:55:07.985098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.985235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.985367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.985503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.985706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-10-17 16:55:07.985875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.985902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.985989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.986142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.986319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.986438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-10-17 16:55:07.986556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.986705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.986859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.986902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.987012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.987122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-10-17 16:55:07.987244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.987413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.987600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.987765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-10-17 16:55:07.987892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-10-17 16:55:07.987936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-10-17 16:55:07.988067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.988193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.988304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.988460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.988626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.988776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.988920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.988961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.989075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.989106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.989243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.989290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.989463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.989509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.989673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.989716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.989846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.989873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.990031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.990063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.990168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.990197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.447 [2024-10-17 16:55:07.990305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.447 [2024-10-17 16:55:07.990352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.447 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.990481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.990527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.990647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.990675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.990763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.990791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.990884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.990913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.991873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.991988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.992146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.992303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.992502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.992630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.992748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.992917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.992945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.993955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.993997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.994135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.994164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.448 [2024-10-17 16:55:07.994299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.448 [2024-10-17 16:55:07.994329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.448 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.994450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.994481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.994599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.994629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.994737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.994767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.994870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.994899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.994989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.995150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.995308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.995465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.995621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.995778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.995911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.995942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.996063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.996091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.996211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.996240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.996369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.996400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.996546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.996592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.996703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.996748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.996860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.996888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.997934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.997960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.998076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.998104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.998206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.998236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.998355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.998385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.998480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.998511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.998666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.998699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.998857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.998884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.999012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.999040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.999127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.999155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.999273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.999301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.999463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.999504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.999616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.999644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.449 qpair failed and we were unable to recover it.
00:26:54.449 [2024-10-17 16:55:07.999728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.449 [2024-10-17 16:55:07.999755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:07.999839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:07.999865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:07.999978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.000148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.000298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.000445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.000594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.000796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.000938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.000970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.001087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.001128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.001270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.001317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.001470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.001530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.001741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.001796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.001914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.001942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.002966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.002993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.003143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.003317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.003437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.003618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.003734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.003871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.003996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.004203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.004349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.004458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.004622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.004734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.004852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.004881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.005015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.005057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.005196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.005228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.005452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.450 [2024-10-17 16:55:08.005482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.450 qpair failed and we were unable to recover it.
00:26:54.450 [2024-10-17 16:55:08.005579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.450 [2024-10-17 16:55:08.005611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.450 qpair failed and we were unable to recover it. 00:26:54.450 [2024-10-17 16:55:08.005703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.450 [2024-10-17 16:55:08.005733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.450 qpair failed and we were unable to recover it. 00:26:54.450 [2024-10-17 16:55:08.005894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.450 [2024-10-17 16:55:08.005922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.450 qpair failed and we were unable to recover it. 00:26:54.450 [2024-10-17 16:55:08.006039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.450 [2024-10-17 16:55:08.006067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.450 qpair failed and we were unable to recover it. 00:26:54.450 [2024-10-17 16:55:08.006181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.450 [2024-10-17 16:55:08.006208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.450 qpair failed and we were unable to recover it. 
00:26:54.450 [2024-10-17 16:55:08.006320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.450 [2024-10-17 16:55:08.006350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.006475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.006538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.006630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.006660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.006756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.006800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.006949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.006978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.007123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.007150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.007264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.007291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.007416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.007446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.007568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.007598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.007695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.007730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.007821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.007851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.007950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.008118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.008288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.008418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.008553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.008688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.008796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.008950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.008992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.009147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.009176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.009266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.009293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.009470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.009500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.009687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.009716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.009858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.009889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.010019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.010064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.010187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.010215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.010352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.010385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.010479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.010510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.010661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.010691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.010813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.010844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.011008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.011036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.011164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.011192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.011269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.011297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.011494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.011524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.011694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.011723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 
00:26:54.451 [2024-10-17 16:55:08.011845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.011876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.011991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.012063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.451 [2024-10-17 16:55:08.012213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.451 [2024-10-17 16:55:08.012273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.451 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.012451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.012522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.012705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.012762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.012851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.012880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.012995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.013165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.013301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.013467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.013635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.013761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.013879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.013920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.014010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.014039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.014157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.014189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.014352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.014383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.014483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.014512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.014746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.014795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.014916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.014945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.015058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.015086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.015170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.015214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.015341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.015372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.015500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.015531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.015632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.015663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.015818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.015848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.016069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.016097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.016226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.016256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.016405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.016435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.016539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.016569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.016686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.016716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.016804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.016850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.017043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.017071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.017182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.017225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.017350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.017381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 00:26:54.452 [2024-10-17 16:55:08.017500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.452 [2024-10-17 16:55:08.017530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.452 qpair failed and we were unable to recover it. 
00:26:54.452 [2024-10-17 16:55:08.017631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.017661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.017778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.017808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.017957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.017987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.018101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.018128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.018210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.018237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.018324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.018352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.018495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.018540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.452 qpair failed and we were unable to recover it.
00:26:54.452 [2024-10-17 16:55:08.018698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.452 [2024-10-17 16:55:08.018729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.018918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.018948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.019094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.019122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.019245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.019287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.019468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.019530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.019646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.019690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.019809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.019839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.019974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.020134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.020280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.020400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.020574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.020702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.020859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.020889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.021866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.021896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.022031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.022073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.022191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.022220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.022332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.022363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.022513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.022558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.022717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.022766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.022882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.022909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.023022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.023050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.023151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.023179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.023289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.023316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.023428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.453 [2024-10-17 16:55:08.023455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.453 qpair failed and we were unable to recover it.
00:26:54.453 [2024-10-17 16:55:08.023542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.023570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.023688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.023715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.023812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.023840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.023927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.023954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.024946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.024973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.025122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.025259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.025422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.025612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.025754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.025889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.025979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.026153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.026295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.026462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.026604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.026747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.026925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.026951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.027101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.027221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.027359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.027526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.027644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.027815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.027977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.028828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.028871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b31ff0 (9): Bad file descriptor
00:26:54.454 [2024-10-17 16:55:08.029036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.029077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.029196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.454 [2024-10-17 16:55:08.029225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.454 qpair failed and we were unable to recover it.
00:26:54.454 [2024-10-17 16:55:08.029330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.029376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.029509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.029555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.029675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.029720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.029832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.029859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.030949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.030979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.031140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.031180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.031275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.031305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.031416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.031461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.031567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.031599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.031743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.031788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.031903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.031930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.032957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.032984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.033870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.033899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.034030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.034071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.034189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.034218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.034329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.034357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.034472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.034504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.034596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.455 [2024-10-17 16:55:08.034625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.455 qpair failed and we were unable to recover it.
00:26:54.455 [2024-10-17 16:55:08.034708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.455 [2024-10-17 16:55:08.034737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.034866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.034909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.035046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.035175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.035327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.035477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.035635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.035801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.035943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.035975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.036077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.036105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.036235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.036265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.036392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.036424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.036537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.036568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.036740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.036791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.036907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.036935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.037068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.037114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.037228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.037257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.037382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.037409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.037532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.037559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.037672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.037699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.037827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.037868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.037998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.038125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.038265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.038399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.038538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.038701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.038819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.038848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.038976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.039170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.039287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.039445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.039582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.039737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.039911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.039938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.040031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.040148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.040287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.040527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.040660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 
00:26:54.456 [2024-10-17 16:55:08.040793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.040956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.040983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.041072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.041100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.456 qpair failed and we were unable to recover it. 00:26:54.456 [2024-10-17 16:55:08.041210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.456 [2024-10-17 16:55:08.041237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.041376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.041406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.041557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.041587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.041698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.041725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.041967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.041996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.042136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.042163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.042270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.042315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.042442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.042471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.042590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.042619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.042739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.042773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.042879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.042906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.043032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.043178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.043283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.043431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.043584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.043745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.043900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.043929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.044064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.044092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.044223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.044264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.044405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.044453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.044585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.044617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.044797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.044842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.044967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.044995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.045124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.045268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.045387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.045513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.045669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.045806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.045939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.045966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.046075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.046104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.457 [2024-10-17 16:55:08.046204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.046233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.046346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.046375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.046560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.046610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.046773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.046818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 00:26:54.457 [2024-10-17 16:55:08.046947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.457 [2024-10-17 16:55:08.046993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.457 qpair failed and we were unable to recover it. 
00:26:54.460 [2024-10-17 16:55:08.063841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.460 [2024-10-17 16:55:08.063868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.460 qpair failed and we were unable to recover it. 00:26:54.460 [2024-10-17 16:55:08.064008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.460 [2024-10-17 16:55:08.064038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.460 qpair failed and we were unable to recover it. 00:26:54.460 [2024-10-17 16:55:08.064122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.461 [2024-10-17 16:55:08.064149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.461 qpair failed and we were unable to recover it. 00:26:54.461 [2024-10-17 16:55:08.064243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.461 [2024-10-17 16:55:08.064271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.461 qpair failed and we were unable to recover it. 00:26:54.461 [2024-10-17 16:55:08.064372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.461 [2024-10-17 16:55:08.064403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.461 qpair failed and we were unable to recover it. 
00:26:54.751 [2024-10-17 16:55:08.064554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.751 [2024-10-17 16:55:08.064584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.751 qpair failed and we were unable to recover it. 00:26:54.751 [2024-10-17 16:55:08.064681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.751 [2024-10-17 16:55:08.064713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.751 qpair failed and we were unable to recover it. 00:26:54.751 [2024-10-17 16:55:08.064828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.751 [2024-10-17 16:55:08.064856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.751 qpair failed and we were unable to recover it. 00:26:54.751 [2024-10-17 16:55:08.064952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.751 [2024-10-17 16:55:08.064979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.751 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.065084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.065269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.065435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.065556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.065695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.065829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.065945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.065974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.066108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.066135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.066216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.066242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.066403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.066433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.066557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.066587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.066703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.066729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.066843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.066874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.067008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.067166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.067326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.067452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.067577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.067756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.067960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.067989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.068084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.068217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.068340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.068498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.068642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.068779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.068934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.068964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.069121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.069163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.069300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.069342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.069489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.069521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.069614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.069646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.752 [2024-10-17 16:55:08.069773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.069804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.069959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.069988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.070108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.070136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.070250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.070277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 00:26:54.752 [2024-10-17 16:55:08.070370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.752 [2024-10-17 16:55:08.070418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.752 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.070558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.070584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.070757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.070787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.070892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.070921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.071015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.071059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.071188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.071220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.071350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.071397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.071540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.071586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.071804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.071854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.071973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.072123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.072260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.072426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.072581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.072737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.072909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.072940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.073054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.073220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.073334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.073485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.073620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.073745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.073883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.073912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.074051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.074260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.074384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.074510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.074664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.074798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.074939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.074967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 00:26:54.753 [2024-10-17 16:55:08.075068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.075096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
00:26:54.753 [2024-10-17 16:55:08.075180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.753 [2024-10-17 16:55:08.075207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.753 qpair failed and we were unable to recover it. 
[the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 16:55:08.075 through 16:55:08.092, for tqpair values 0x1b24060, 0x7f01f8000b90, 0x7f01f4000b90, and 0x7f0200000b90, always against addr=10.0.0.2, port=4420, each repetition ending "qpair failed and we were unable to recover it."]
00:26:54.757 [2024-10-17 16:55:08.092965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.092994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.093145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.093174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.093304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.093335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.093458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.093488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.093645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.093698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-10-17 16:55:08.093834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.093876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.094022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.094163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.094274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.094512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-10-17 16:55:08.094653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.094804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.094971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.094997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.095096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.095123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.095238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.095265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-10-17 16:55:08.095411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.095438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.095534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.095561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.095737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.095783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.095974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.096028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.096142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.096169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-10-17 16:55:08.096268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.096315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.096412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.096444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.096603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.096647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.096787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.096824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.096958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.097007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 
00:26:54.757 [2024-10-17 16:55:08.097135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.097166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.097258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.097287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.757 qpair failed and we were unable to recover it. 00:26:54.757 [2024-10-17 16:55:08.097395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.757 [2024-10-17 16:55:08.097426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.097531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.097560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.097787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.097850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.098007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.098035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.098181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.098227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.098382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.098411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.098535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.098563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.098656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.098683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.098782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.098831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.098989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.099044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.099192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.099224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.099385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.099416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.099508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.099539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.099715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.099781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.099877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.099904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.100043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.100085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.100225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.100272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.100406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.100439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.100640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.100704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.100825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.100855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.100974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.101008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.101130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.101158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.101262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.101308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.101424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.101452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.101623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.101655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.101794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.101839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.101985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.102137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.102255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.102394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.102553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.102712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.102894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.102935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.103041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.103069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.103161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.103190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.103309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.103342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.103508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.103570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.103764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.103795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 
00:26:54.758 [2024-10-17 16:55:08.103943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.103972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.104113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.104140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.104221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.758 [2024-10-17 16:55:08.104248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.758 qpair failed and we were unable to recover it. 00:26:54.758 [2024-10-17 16:55:08.104364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.104393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-10-17 16:55:08.104574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.104622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-10-17 16:55:08.104770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.104800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-10-17 16:55:08.104887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.104916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-10-17 16:55:08.105083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.105124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-10-17 16:55:08.105225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.105272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 00:26:54.759 [2024-10-17 16:55:08.105383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.759 [2024-10-17 16:55:08.105413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.759 qpair failed and we were unable to recover it. 
00:26:54.759 [2024-10-17 16:55:08.105578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.105633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.105763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.105794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.105889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.105919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.106071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.106099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.106228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.106258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.106401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.106449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.106566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.106594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.106715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.106744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.106870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.106910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.107889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.107919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.108929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.108956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.109935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.109963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.110061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.110087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.110168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.110195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.759 [2024-10-17 16:55:08.110312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.759 [2024-10-17 16:55:08.110340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.759 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.110423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.110450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.110559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.110604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.110712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.110739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.110816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.110842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.110985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.111164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.111326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.111496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.111628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.111771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.111891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.111919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.112969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.112998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.113129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.113156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.113254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.113285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.113433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.113496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.113599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.113626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.113771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.113798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.113908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.113935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.114066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.114129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.114323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.114354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.114487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.114551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.114648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.114678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.114840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.114871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.114961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.114988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.115118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.115149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.115298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.115364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.115593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.115649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.115806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.115860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.115965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.115993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.116091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.116118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.116259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.116305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.116444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.116474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.116609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.116688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.760 [2024-10-17 16:55:08.116813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.760 [2024-10-17 16:55:08.116844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.760 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.116998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.117179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.117291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.117408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.117580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.117764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.117908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.117936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.118039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.118067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.118216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.118248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.118358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.118428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.118617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.118648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.118810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.118873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.119876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.119991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.120117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.120237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.120382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.120626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.120769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.120936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.120964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.121122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.121167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.121363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.121391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.121575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.121631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.121798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.121859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.122008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.122037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.122119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.122155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.122233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.122277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.122377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.122408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.122569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.761 [2024-10-17 16:55:08.122600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.761 qpair failed and we were unable to recover it.
00:26:54.761 [2024-10-17 16:55:08.122731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-10-17 16:55:08.122763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-10-17 16:55:08.122877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-10-17 16:55:08.122907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-10-17 16:55:08.122995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-10-17 16:55:08.123029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-10-17 16:55:08.123158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-10-17 16:55:08.123190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-10-17 16:55:08.123333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-10-17 16:55:08.123379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-10-17 16:55:08.123465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-10-17 16:55:08.123493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.762 qpair failed and we were unable to recover it.
00:26:54.762 [2024-10-17 16:55:08.123743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.123803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.123926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.123971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.124105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.124136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.124237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.124267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.124377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.124405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.124544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.124576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.124748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.124802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.124923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.124950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.125109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.125154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.125272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.125304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.125396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.125439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.125601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.125631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.125805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.125880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.125974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.126143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.126285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.126452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.126557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.126731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.126866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.126894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.126997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.127183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.127341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.127497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.127653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.127781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.127953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.127982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.128111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.128250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.128468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.128585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.128695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.128812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.128959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.128987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 00:26:54.762 [2024-10-17 16:55:08.129094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.762 [2024-10-17 16:55:08.129121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.762 qpair failed and we were unable to recover it. 
00:26:54.762 [2024-10-17 16:55:08.129209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.129236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.129343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.129370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.129483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.129511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.129618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.129646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.129792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.129825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.129975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.130113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.130282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.130440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.130593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.130711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.130853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.130887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.131011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.131041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.131157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.131185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.131301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.131346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.131430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.131458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.131693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.131751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.131842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.131869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.131983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.132111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.132232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.132379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.132490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.132631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.132768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.132931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.132972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.133125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.133155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.133290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.133334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.133576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.133607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.133728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.133777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.133865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.133893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.134513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.134873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.134914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 00:26:54.763 [2024-10-17 16:55:08.135028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.135070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.763 qpair failed and we were unable to recover it. 
00:26:54.763 [2024-10-17 16:55:08.135176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.763 [2024-10-17 16:55:08.135216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-10-17 16:55:08.135367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-10-17 16:55:08.135396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-10-17 16:55:08.135512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-10-17 16:55:08.135540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-10-17 16:55:08.135654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-10-17 16:55:08.135681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 00:26:54.764 [2024-10-17 16:55:08.135837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.764 [2024-10-17 16:55:08.135867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.764 qpair failed and we were unable to recover it. 
00:26:54.766 [2024-10-17 16:55:08.152995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-10-17 16:55:08.153048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.766 qpair failed and we were unable to recover it. 00:26:54.766 [2024-10-17 16:55:08.153141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-10-17 16:55:08.153170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.766 qpair failed and we were unable to recover it. 00:26:54.766 [2024-10-17 16:55:08.153291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.766 [2024-10-17 16:55:08.153339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.766 qpair failed and we were unable to recover it. 00:26:54.766 [2024-10-17 16:55:08.153451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.153479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.153564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.153592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.153732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.153759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.153891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.153933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.154413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.154933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.154959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.155111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.155221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.155330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.155470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.155620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.155776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.155897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.155928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.156084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.156233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.156355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.156497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.156638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.156809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.156956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.156983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.157100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.157127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.157233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.157262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.157431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.157489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.157689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.157756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.157926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.157954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.158093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.158134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.158258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.158304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.158449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.158496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.158607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.158638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.158854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.158882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.158971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.159006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 
00:26:54.767 [2024-10-17 16:55:08.159119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.159149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.159298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.159368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.159517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.159575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.159741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.767 [2024-10-17 16:55:08.159799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.767 qpair failed and we were unable to recover it. 00:26:54.767 [2024-10-17 16:55:08.159907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.159958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.160103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.160130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.160214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.160251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.160360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.160387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.160535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.160588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.160716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.160746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.160897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.160926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.161047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.161154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.161290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.161423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.161561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.161700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.161853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.161882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.162013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.162055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.162184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.162224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.162322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.162367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.162529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.162560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.162682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.162712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.162814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.162844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.162999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.163192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.163369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.163502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.163661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.163770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.163885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.163932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.164051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.164085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.164179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.164228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.164363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.164393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.164607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.164637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.164734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.164764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.164858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.164889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.165011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.165052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.165162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.165190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.165327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.165374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.768 [2024-10-17 16:55:08.165605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.165650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.165808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.165858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.165972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.166015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.166115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.166142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 00:26:54.768 [2024-10-17 16:55:08.166262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.768 [2024-10-17 16:55:08.166305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.768 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.166491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.166541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.166685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.166739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.166867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.166896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.167055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.167096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.167203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.167232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.167378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.167422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.167531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.167563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.167683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.167713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.167814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.167845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.167984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.168018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.168172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.168199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.168304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.168334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.168513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.168565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.168691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.168725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.168850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.168880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.168976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.169013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.169187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.169214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.169337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.169370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.169479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.169507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.169680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.169711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.169838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.169866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.169977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.170145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.170322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.170444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.170584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.170709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.170893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.170934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.171046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.171079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.171207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.171239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.171429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.171487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.171706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.171762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.171892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.171921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.172070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.172101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 00:26:54.769 [2024-10-17 16:55:08.172202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.172232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.769 qpair failed and we were unable to recover it. 
00:26:54.769 [2024-10-17 16:55:08.172384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.769 [2024-10-17 16:55:08.172414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.172636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.172701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.172827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.172860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.172965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.173021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.173159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.173201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.173326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.173371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.173501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.173532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.173628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.173659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.173748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.173791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.174396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.174433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.174589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.174648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.174774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.174805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.175358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.175390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.175563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.175593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.175693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.175723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.175850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.175878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.176034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.176153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.176297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.176466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.176622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.176766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.176924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.176950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.177058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.177100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.177199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.177227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.177346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.177373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.177469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.177494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.177661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.177704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.177853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.177882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.178014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.178145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.178281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.178472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.178636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.178758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.178877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.178906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.179012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.179066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.179148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.179174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.179293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.179319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.179416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.179444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 
00:26:54.770 [2024-10-17 16:55:08.179555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.179580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.770 [2024-10-17 16:55:08.179686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.770 [2024-10-17 16:55:08.179714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.770 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.179828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.179857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.179959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.179984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.180123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.180151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.180301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.180352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.180511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.180542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.180669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.180699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.180830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.180859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.180995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.181120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.181260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.181395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.181546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.181739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.181873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.181898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.181994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.182136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.182243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.182406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.182543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.182684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.182830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.182861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.182980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.183168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.183280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.183412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.183570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.183697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.183846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.183967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.183995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.184603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.184901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.184994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.185047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.185124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.185149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 
00:26:54.771 [2024-10-17 16:55:08.185238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.185263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.185399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.771 [2024-10-17 16:55:08.185427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.771 qpair failed and we were unable to recover it. 00:26:54.771 [2024-10-17 16:55:08.185525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.185567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.185658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.185683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.185799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.185825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.185907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.185933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.186546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.186951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.186991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.187121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.187149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.187262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.187292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.187436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.187465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.187582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.187609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.187696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.187722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.187850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.187885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.188006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.188125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.188238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.188377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.188528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.188640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.188795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.188901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.188947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.189065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.189173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.189311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.189467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.189636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.189761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.189899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.189925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.190009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.190037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.190121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.190147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.190233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.190259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.194124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.194166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.194314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.194347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 
00:26:54.772 [2024-10-17 16:55:08.194444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.194473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.194608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.194634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.194729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.772 [2024-10-17 16:55:08.194755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.772 qpair failed and we were unable to recover it. 00:26:54.772 [2024-10-17 16:55:08.194890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.194920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.195041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 
00:26:54.773 [2024-10-17 16:55:08.195175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.195314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.195437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.195577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.195690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 
00:26:54.773 [2024-10-17 16:55:08.195828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.195853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.195999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.196165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.196308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.196429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 
00:26:54.773 [2024-10-17 16:55:08.196544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.196688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.196811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.196838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.197023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.197088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.197211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.197239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 
00:26:54.773 [2024-10-17 16:55:08.197370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.197396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.197608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.197660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.197790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.197816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.197918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.197957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 00:26:54.773 [2024-10-17 16:55:08.198113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.773 [2024-10-17 16:55:08.198141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.773 qpair failed and we were unable to recover it. 
00:26:54.773 [2024-10-17 16:55:08.198261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.198288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.198417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.198460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.198588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.198640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.198740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.198768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.198851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.198877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.198991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.199190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.199340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.199490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.199678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.199817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.199940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.199966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.200092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.200235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.200376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.200483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.200595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.773 [2024-10-17 16:55:08.200731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.773 qpair failed and we were unable to recover it.
00:26:54.773 [2024-10-17 16:55:08.200815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.200843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.200934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.200961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.201901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.201928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.202927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.202959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.203937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.203976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.204897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.204955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.205066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.205095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.205228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.205277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.205397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.205443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.205583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.205627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.205789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.205844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.205974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.774 [2024-10-17 16:55:08.206032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.774 qpair failed and we were unable to recover it.
00:26:54.774 [2024-10-17 16:55:08.206153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.206180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.206290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.206319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.206471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.206522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.206628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.206657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.206781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.206810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.206904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.206934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.207102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.207234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.207404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.207545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.207672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.207830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.207988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.208211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.208348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.208501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.208622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.208742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.208873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.208912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.209082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.209249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.209429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.209555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.209734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.209862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.209961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.210105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.210266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.210386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.210544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.210739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.210871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.210902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.211963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.211994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.775 [2024-10-17 16:55:08.212115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.775 [2024-10-17 16:55:08.212143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.775 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.212277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.212304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.212402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.212429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.212615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.212667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.212791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.212820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.212947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.212978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.213925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.213954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.214104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.214144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.214274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.214320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.214418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.214446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.214532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.214558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.214667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.214698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.214823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.214854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.215015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.215068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.215153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.776 [2024-10-17 16:55:08.215180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.776 qpair failed and we were unable to recover it.
00:26:54.776 [2024-10-17 16:55:08.215264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.215309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.215463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.215516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.215650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.215685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.215780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.215810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.215929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.215958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 
00:26:54.776 [2024-10-17 16:55:08.216110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.216141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.216278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.216328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.216485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.216537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.216630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.216656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.216742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.216768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 
00:26:54.776 [2024-10-17 16:55:08.216863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.216891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.216987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.217140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.217311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.217447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 
00:26:54.776 [2024-10-17 16:55:08.217590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.217734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.217870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.217909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.218023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.218052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.776 [2024-10-17 16:55:08.218167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.218193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 
00:26:54.776 [2024-10-17 16:55:08.218313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.776 [2024-10-17 16:55:08.218350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.776 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.218461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.218487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.218588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.218627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.218724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.218763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.218882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.218910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.218998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.219032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.219114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.219159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.219321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.219367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.219512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.219556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.219717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.219763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.219868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.219895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.219989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.220039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.220137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.220169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.220295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.220324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.220467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.220501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.220684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.220741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.220878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.220917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.221030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.221151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.221262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.221443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.221642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.221816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.221970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.221996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.222113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.222253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.222373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.222479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.222624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.222766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.222885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.222911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.222996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.223120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.223261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.223383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.223514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.223660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.223792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.223944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.223983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.224095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.224134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 
00:26:54.777 [2024-10-17 16:55:08.224221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.777 [2024-10-17 16:55:08.224249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.777 qpair failed and we were unable to recover it. 00:26:54.777 [2024-10-17 16:55:08.224348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.224377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.224469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.224498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.224617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.224668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.224797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.224828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.224961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.224987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.225134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.225161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.225266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.225295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.225400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.225427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.225563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.225607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.225727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.225754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.225867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.225893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.225994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.226151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.226294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.226436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.226565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.226708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.226847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.226969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.226995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.227096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.227234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.227374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.227514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.227656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.227783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.227936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.227964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.228083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.228117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.228220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.228250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.228363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.228417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.228596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.228642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.228783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.228832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.228949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.228978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.229080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.229108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.229228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.229256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 
00:26:54.778 [2024-10-17 16:55:08.229381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.229424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.229557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.778 [2024-10-17 16:55:08.229602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.778 qpair failed and we were unable to recover it. 00:26:54.778 [2024-10-17 16:55:08.229704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.229751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.229837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.229863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.229995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.230153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.230318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.230454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.230605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.230762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.230907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.230934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.231024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.231139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.231302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.231456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.231577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.231699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.231860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.231889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.232022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.232166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.232325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.232500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.232663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.232800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.232941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.232968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.233100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.233134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.233220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.233246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.233333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.233374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.233499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.233529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.233654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.233683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.233810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.233840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.233972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.234116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.234276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.234421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.234554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.234710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.234852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.234965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.234992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.235086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.235113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 
00:26:54.779 [2024-10-17 16:55:08.235220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.235247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.235389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.235415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.235496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.779 [2024-10-17 16:55:08.235522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.779 qpair failed and we were unable to recover it. 00:26:54.779 [2024-10-17 16:55:08.235611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.235638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.235751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.235778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.235871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.235898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.236511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.236904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.236931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.237018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.237122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.237276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.237445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.237621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.237752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.237945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.237984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.238150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.238202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.238361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.238411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.238507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.238536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.238682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.238733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.238833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.238871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.238987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.239157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.239289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.239415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.239554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.239687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.239825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.239951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.239990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.240100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.240212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.240364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.240524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.240646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.240773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 
00:26:54.780 [2024-10-17 16:55:08.240917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.240960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.241080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.241108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.241241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.241271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.780 [2024-10-17 16:55:08.241364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.780 [2024-10-17 16:55:08.241392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.780 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.241479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.241508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.241609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.241637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.241807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.241834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.241964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.242139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.242285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.242412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.242537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.242713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.242846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.242960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.242988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.243094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.243230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.243386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.243519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.243645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.243791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.243945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.243973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.244135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.244182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.244309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.244353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.244483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.244534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.244647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.244673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.244778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.244807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.244949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.244993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.245147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.245176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.245285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.245340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.245435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.245474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.245682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.245738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.245849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.245875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.245989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.246109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.246242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.246376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.246479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.246586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.246700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 
00:26:54.781 [2024-10-17 16:55:08.246855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.246894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.246985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.247024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.247116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.247142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.247225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.781 [2024-10-17 16:55:08.247252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.781 qpair failed and we were unable to recover it. 00:26:54.781 [2024-10-17 16:55:08.247331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.247375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.247492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.247520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.247618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.247648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.247729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.247757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.247937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.247980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.248102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.248149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.248306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.248339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.248445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.248474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.248626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.248661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.248752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.248780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.248907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.248937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.249035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.249167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.249347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.249480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.249650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.249821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.249937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.249967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.250092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.250218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.250346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.250467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.250616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.250776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.250903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.250931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.251025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.251162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.251310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.251443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.251577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.251742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.251858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.251885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.251991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.252024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.252125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.252155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.252337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.252385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 00:26:54.782 [2024-10-17 16:55:08.252537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.782 [2024-10-17 16:55:08.252586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.782 qpair failed and we were unable to recover it. 
00:26:54.782 [2024-10-17 16:55:08.252671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.252698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.252823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.252851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.252935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.252962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.253104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.253135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.253229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.253258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 
00:26:54.783 [2024-10-17 16:55:08.253353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.253381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.253468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.253497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.253605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.253651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.253797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.253836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.253979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 
00:26:54.783 [2024-10-17 16:55:08.254136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.254333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.254481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.254641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.254813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 
00:26:54.783 [2024-10-17 16:55:08.254959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.254986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.255136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.255181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.255313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.255357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.255489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.255532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 00:26:54.783 [2024-10-17 16:55:08.255620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.783 [2024-10-17 16:55:08.255646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.783 qpair failed and we were unable to recover it. 
00:26:54.783 [2024-10-17 16:55:08.255790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.255827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.255902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.255928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.256910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.256939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.257034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.257073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.257229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.257267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.257385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.257430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.257572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.257623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.257724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.257753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.257844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.257873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.258015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.258043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.258147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.258177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.258319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.258351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.258480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.258523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.258623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.783 [2024-10-17 16:55:08.258648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.783 qpair failed and we were unable to recover it.
00:26:54.783 [2024-10-17 16:55:08.258731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.258759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.258844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.258881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.258996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.259149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.259298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.259437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.259591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.259743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.259908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.259952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.260101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.260128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.260231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.260261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.260417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.260481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.260672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.260722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.260840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.260869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.260968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.260994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.261119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.261262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.261404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.261562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.261689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.261838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.261990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.262884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.262979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.263141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.263290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.263430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.263533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.263711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.263885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.263928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.264081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.264232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.264349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.264481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.264609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.784 [2024-10-17 16:55:08.264737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.784 qpair failed and we were unable to recover it.
00:26:54.784 [2024-10-17 16:55:08.264846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.264871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.264969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.264996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.265115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.265161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.265278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.265307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.265398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.265427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.265535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.265567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.265758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.265805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.265929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.265968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.266095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.266140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.266267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.266296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.266438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.266487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.266635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.266685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.266776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.266805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.266909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.266935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.267916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.267943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.268089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.268236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.268356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.268476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.268630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.268844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.268991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.269025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.269128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.269159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.269264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.269292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.269396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.269441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.269567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.269604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.269703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.785 [2024-10-17 16:55:08.269730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.785 qpair failed and we were unable to recover it.
00:26:54.785 [2024-10-17 16:55:08.269860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.269900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 00:26:54.785 [2024-10-17 16:55:08.269989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.270023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 00:26:54.785 [2024-10-17 16:55:08.270107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.270134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 00:26:54.785 [2024-10-17 16:55:08.270223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.270250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 00:26:54.785 [2024-10-17 16:55:08.270334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.270376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 
00:26:54.785 [2024-10-17 16:55:08.270491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.270542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 00:26:54.785 [2024-10-17 16:55:08.270669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.785 [2024-10-17 16:55:08.270709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.785 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.270828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.270860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.270988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.271150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.271293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.271445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.271584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.271734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.271877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.271902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.272028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.272164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.272330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.272485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.272615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.272734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.272923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.272951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.273073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.273188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.273305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.273498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.273648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.273800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.273948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.273988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.274119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.274148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.274269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.274302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.274421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.274477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.274658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.274715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.274816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.274845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.274992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.275135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.275249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.275420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.275556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.275709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 
00:26:54.786 [2024-10-17 16:55:08.275853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.275897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.276025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.276086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.786 [2024-10-17 16:55:08.276212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.786 [2024-10-17 16:55:08.276241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.786 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.276375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.276425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.276603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.276654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.276743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.276770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.276922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.276948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.277111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.277154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.277253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.277283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.277379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.277416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.277553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.277602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.277715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.277743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.277858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.277885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.277995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.278109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.278223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.278442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.278624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.278783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.278931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.278957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.279052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.279170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.279334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.279498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.279643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.279809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.279925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.279954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.280071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.280098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.280185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.280211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.280314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.280373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.280481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.280511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.280680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.280745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.280869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.280906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.281050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.281078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.281203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.281248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.281364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.281390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.281507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.281533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.281643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.281669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.281781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.281830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.281966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.282016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 
00:26:54.787 [2024-10-17 16:55:08.282146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.282197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.282292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.282319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.282429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.282455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.787 [2024-10-17 16:55:08.282572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.787 [2024-10-17 16:55:08.282599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.787 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.282742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.282775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.282922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.282961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.283100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.283130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.283233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.283260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.283388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.283432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.283536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.283565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.283711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.283759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.283859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.283885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.283981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.284149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.284290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.284502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.284660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.284851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.284965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.284998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.285089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.285116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.285214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.285243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.285388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.285438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.285553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.285603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.285774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.285828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.285942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.285969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.286082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.286121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.286236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.286267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.286367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.286397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.286579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.286629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.286824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.286852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.286950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.286990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.287098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.287131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.287220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.287247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.287357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.287406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.287565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.287619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.287715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.287744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.287841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.287872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.288019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.288156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.288296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 
00:26:54.788 [2024-10-17 16:55:08.288486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.288636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.288766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.288881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.788 [2024-10-17 16:55:08.288907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.788 qpair failed and we were unable to recover it. 00:26:54.788 [2024-10-17 16:55:08.289027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.289150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.289304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.289425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.289564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.289759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.289896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.289923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.290038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.290182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.290314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.290437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.290579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.290757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.290903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.290933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.291026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.291076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.291194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.291241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.291373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.291418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.291545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.291591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.291697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.291726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.291845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.291874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.292009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.292200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.292339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.292474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.292611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.292743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.292864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.292890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.293015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.293167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.293276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.293410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.293539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.293688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.293855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.293883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.294012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.294152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.789 [2024-10-17 16:55:08.294306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.294438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.294570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.294695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 00:26:54.789 [2024-10-17 16:55:08.294853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.789 [2024-10-17 16:55:08.294881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.789 qpair failed and we were unable to recover it. 
00:26:54.790 [2024-10-17 16:55:08.294962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.294994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.295115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.295247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.295410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.295524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 
00:26:54.790 [2024-10-17 16:55:08.295643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.295780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.295956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.295984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.296130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.296160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.296282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.296311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 
00:26:54.790 [2024-10-17 16:55:08.296457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.296486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.296602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.296641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.296781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.296810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.296930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.296958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.297077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.297103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 
00:26:54.790 [2024-10-17 16:55:08.297212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.297240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.297385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.297433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.297548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.297573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.297733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.297761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.297901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.297946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 
00:26:54.790 [2024-10-17 16:55:08.298066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.298095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.298212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.298241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.298424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.298477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.298574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.298603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 00:26:54.790 [2024-10-17 16:55:08.298756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.790 [2024-10-17 16:55:08.298805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.790 qpair failed and we were unable to recover it. 
00:26:54.790 [2024-10-17 16:55:08.298894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.298921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.299011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.299037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.299149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.299178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.299345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.299381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.299490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.299518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.299659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.299704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.299852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.299880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.300864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.300906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.301010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.301055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.790 qpair failed and we were unable to recover it.
00:26:54.790 [2024-10-17 16:55:08.301170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.790 [2024-10-17 16:55:08.301196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.301297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.301322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.301460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.301486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.301644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.301673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.301766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.301795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.301909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.301937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.302045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.302071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.302184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.302210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.302327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.302371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.302492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.302541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.302714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.302767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.302891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.302920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.303083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.303223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.303358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.303537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.303671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.303796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.303976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.304117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.304264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.304419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.304591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.304761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.304924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.304951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.305948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.305976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.306121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.306147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.306257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.306283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.306398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.306425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.791 [2024-10-17 16:55:08.306499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.791 [2024-10-17 16:55:08.306524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.791 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.306661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.306707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.306812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.306852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.306944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.306972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.307082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.307109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.307207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.307236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.307384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.307412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.307535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.307566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.307759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.307810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.307926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.307952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.308962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.308988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.309937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.309976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.310919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.310966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.311122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.311164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.311269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.311300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.311436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.311487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.311635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.311684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.311823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.311851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.311939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.311966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.312131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.312174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.312276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.312306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.312461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.312510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.792 [2024-10-17 16:55:08.312594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.792 [2024-10-17 16:55:08.312623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.792 qpair failed and we were unable to recover it.
00:26:54.793 [2024-10-17 16:55:08.312761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.793 [2024-10-17 16:55:08.312810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.793 qpair failed and we were unable to recover it.
00:26:54.793 [2024-10-17 16:55:08.312910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.793 [2024-10-17 16:55:08.312935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.793 qpair failed and we were unable to recover it.
00:26:54.793 [2024-10-17 16:55:08.313068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.313094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.313192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.313219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.313331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.313361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.313487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.313536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.313722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.313750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.313914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.313940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.314036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.314063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.314148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.314173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.314339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.314367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.314496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.314524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.314687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.314736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.314871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.314900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.315482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.315884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.315993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.316107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.316225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.316392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.316528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.316656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.316799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.316933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.316958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.317080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.317106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.317245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.317300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.317496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.317526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.317682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.317710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.317829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.317857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.317968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.317995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.318096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.318122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.318216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.318242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 
00:26:54.793 [2024-10-17 16:55:08.318431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.318479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.318631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.318682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.318805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.318833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.793 [2024-10-17 16:55:08.318934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.793 [2024-10-17 16:55:08.318959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.793 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.319084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.319200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.319335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.319497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.319629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.319744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.319895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.319923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.320042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.320198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.320305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.320455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.320608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.320773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.320914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.320942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.321093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.321119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.321227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.321252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.321367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.321396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.321503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.321544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.321702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.321730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.321849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.321878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.322039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.322079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.322210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.322249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.322427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.322470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.322565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.322595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.322701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.322726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.322863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.322892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.323013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.323172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.323305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.323444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.323623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.323807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.323958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.323986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.324144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.324184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.324329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.324377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.324534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.324585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 
00:26:54.794 [2024-10-17 16:55:08.324672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.324701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.324878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.324925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.325035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.325061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.794 [2024-10-17 16:55:08.325147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.794 [2024-10-17 16:55:08.325173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.794 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.325281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.325306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 
00:26:54.795 [2024-10-17 16:55:08.325446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.325482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.325630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.325659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.325791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.325821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.325979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.326125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 
00:26:54.795 [2024-10-17 16:55:08.326270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.326396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.326556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.326700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.326833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.326877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 
00:26:54.795 [2024-10-17 16:55:08.326997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.327035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.327151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.327178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.327262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.327306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.327401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.327431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 00:26:54.795 [2024-10-17 16:55:08.327545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.795 [2024-10-17 16:55:08.327586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.795 qpair failed and we were unable to recover it. 
00:26:54.795 [2024-10-17 16:55:08.327761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.327790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.327898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.327924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.328939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.328967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.329942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.329986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.330123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.795 [2024-10-17 16:55:08.330157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.795 qpair failed and we were unable to recover it.
00:26:54.795 [2024-10-17 16:55:08.330303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.330332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.330448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.330478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.330615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.330659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.330753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.330779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.330898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.330925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.331915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.331954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.332080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.332108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.332240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.332269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.332407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.332458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.332597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.332642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.332819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.332848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.332946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.332976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.333106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.333145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.333242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.333271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.333404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.333454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.333585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.333635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.333770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.333796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.333910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.333937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.334026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.334052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.334155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.334184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.334305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.334332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.334481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.334507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.334615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.796 [2024-10-17 16:55:08.334641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.796 qpair failed and we were unable to recover it.
00:26:54.796 [2024-10-17 16:55:08.334768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.334806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.334915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.334953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.335889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.335994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.336865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.336978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.337881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.337987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.338909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.338994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.339139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.339291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.339438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.339591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.339714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.339900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.339928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.340017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.340044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.340157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.797 [2024-10-17 16:55:08.340182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.797 qpair failed and we were unable to recover it.
00:26:54.797 [2024-10-17 16:55:08.340321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.798 [2024-10-17 16:55:08.340346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.798 qpair failed and we were unable to recover it.
00:26:54.798 [2024-10-17 16:55:08.340431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.798 [2024-10-17 16:55:08.340456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.798 qpair failed and we were unable to recover it.
00:26:54.798 [2024-10-17 16:55:08.340606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.798 [2024-10-17 16:55:08.340657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.798 qpair failed and we were unable to recover it.
00:26:54.798 [2024-10-17 16:55:08.340748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.798 [2024-10-17 16:55:08.340776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.798 qpair failed and we were unable to recover it.
00:26:54.798 [2024-10-17 16:55:08.340890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.798 [2024-10-17 16:55:08.340918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.798 qpair failed and we were unable to recover it.
00:26:54.798 [2024-10-17 16:55:08.341089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.798 [2024-10-17 16:55:08.341140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.798 qpair failed and we were unable to recover it.
00:26:54.798 [2024-10-17 16:55:08.341269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.341298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.341422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.341466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.341595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.341624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.341735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.341761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.341849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.341875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.798 [2024-10-17 16:55:08.342016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.798 [2024-10-17 16:55:08.342622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.342905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.342994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.343205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.798 [2024-10-17 16:55:08.343340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.343446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.343552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.343666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.343805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.798 [2024-10-17 16:55:08.343964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.343992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.344120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.344232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.344372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.344503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.798 [2024-10-17 16:55:08.344661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.344797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.344939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.344982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.345163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.345208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.345308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.345335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.798 [2024-10-17 16:55:08.345448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.345492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.345619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.345663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.345744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.345771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.345904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.345943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 00:26:54.798 [2024-10-17 16:55:08.346043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.798 [2024-10-17 16:55:08.346072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.798 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.346161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.346187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.346303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.346329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.346464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.346490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.346571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.346598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.346693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.346720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.346874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.346912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.347542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.347928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.347974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.348067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.348094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.348187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.348213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.348401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.348457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.348699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.348747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.348842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.348870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.348995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.349157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.349305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.349438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.349551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.349676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.349829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.349860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.349997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.350145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.350253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.350419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.350566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.350690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.350888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.350914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.351009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.351036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 00:26:54.799 [2024-10-17 16:55:08.351117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.799 [2024-10-17 16:55:08.351143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.799 qpair failed and we were unable to recover it. 
00:26:54.799 [2024-10-17 16:55:08.351236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.351363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.351480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.351624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.351730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 
00:26:54.800 [2024-10-17 16:55:08.351850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.351972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.351997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.352088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.352130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.352276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.352304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.352469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.352526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 
00:26:54.800 [2024-10-17 16:55:08.352615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.352643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.352764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.352792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.352906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.352932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.353030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.353057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.353215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.353245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 
00:26:54.800 [2024-10-17 16:55:08.353362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.353411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.353589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.353640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.353795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.353842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.353962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.353989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 00:26:54.800 [2024-10-17 16:55:08.354088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.800 [2024-10-17 16:55:08.354114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.800 qpair failed and we were unable to recover it. 
00:26:54.801 [2024-10-17 16:55:08.360559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.801 [2024-10-17 16:55:08.360587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.801 qpair failed and we were unable to recover it.
00:26:54.801 [2024-10-17 16:55:08.360707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.801 [2024-10-17 16:55:08.360735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:54.801 qpair failed and we were unable to recover it.
00:26:54.801 [2024-10-17 16:55:08.360829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.801 [2024-10-17 16:55:08.360859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:54.801 qpair failed and we were unable to recover it.
00:26:54.801 [2024-10-17 16:55:08.360984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.801 [2024-10-17 16:55:08.361031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.801 qpair failed and we were unable to recover it.
00:26:54.801 [2024-10-17 16:55:08.361169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.801 [2024-10-17 16:55:08.361207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:54.801 qpair failed and we were unable to recover it.
00:26:54.803 [2024-10-17 16:55:08.371502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.371530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.371689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.371740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.371881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.371906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.372037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.372067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.372215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.372260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 
00:26:54.803 [2024-10-17 16:55:08.372384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.372427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.372559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.372602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.372738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.372765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.372884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.372911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.373025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 
00:26:54.803 [2024-10-17 16:55:08.373193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.373321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.373476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.373590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.373712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 
00:26:54.803 [2024-10-17 16:55:08.373847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.373959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.373984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.374125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.374153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.374272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.374299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.374447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.374476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 
00:26:54.803 [2024-10-17 16:55:08.374635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.374682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.374837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.374880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.374991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.375059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.375156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.375184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.375347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.375374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 
00:26:54.803 [2024-10-17 16:55:08.375505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.375532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.375649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.375688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.375820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.375847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.375981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.376017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.376156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.376182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 
00:26:54.803 [2024-10-17 16:55:08.376304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.376331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.376481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.376524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.803 [2024-10-17 16:55:08.376646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.803 [2024-10-17 16:55:08.376689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.803 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.376800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.376826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.376946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.376984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.377125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.377151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.377248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.377273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.377423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.377451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.377596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.377624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.377779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.377808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.377954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.377996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.378104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.378252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.378392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.378537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.378676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.378813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.378945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.378971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.379071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.379098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.379212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.379237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.379358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.379383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.379546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.379594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.379674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.379702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.379822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.379851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.380016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.380160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.380311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.380438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.380579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.380754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.380870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.380898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.380994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.381044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.381130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.381163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.381249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.381293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.381420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.381450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 
00:26:54.804 [2024-10-17 16:55:08.381567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.381607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.381756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.804 [2024-10-17 16:55:08.381801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.804 qpair failed and we were unable to recover it. 00:26:54.804 [2024-10-17 16:55:08.381896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.381923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.382070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.382235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.382350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.382493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.382602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.382713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.382838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.382866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.382973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.383117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.383257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.383416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.383528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.383692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.383831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.383956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.383986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.384174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.384202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.384297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.384325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.384421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.384449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.384604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.384656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.384776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.384820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.384980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.385112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.385241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.385422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.385583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.385758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.385926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.385952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.386066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.386186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.386375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.386520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.386644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.386788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.386933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.386961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.387082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.387109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.387198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.387230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.387365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.387410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 
00:26:54.805 [2024-10-17 16:55:08.387543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.387587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.805 [2024-10-17 16:55:08.387722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.805 [2024-10-17 16:55:08.387767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.805 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.387857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.387889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.387981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.388129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 
00:26:54.806 [2024-10-17 16:55:08.388246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.388414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.388592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.388734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.388881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.388908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 
00:26:54.806 [2024-10-17 16:55:08.389044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.389188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.389303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.389421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.389561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 
00:26:54.806 [2024-10-17 16:55:08.389681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.389797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.389921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.389948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.390042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.390167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 
00:26:54.806 [2024-10-17 16:55:08.390340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.390483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.390621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.390735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.390851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 
00:26:54.806 [2024-10-17 16:55:08.390954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.390982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.391105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.391250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.391388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.391511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 
00:26:54.806 [2024-10-17 16:55:08.391624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.391766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.391886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.391913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.392013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.806 [2024-10-17 16:55:08.392041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.806 qpair failed and we were unable to recover it. 00:26:54.806 [2024-10-17 16:55:08.392154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.392286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.392397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.392567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.392680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.392793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.392937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.392963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.393058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.393199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.393330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.393467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.393606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.393730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.393880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.393908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.394015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.394134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.394282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.394406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.394513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.394658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.394829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.394961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.394988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.395094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.395238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.395389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.395528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.395639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.395768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.395908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.395936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.396030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.396153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.396266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.396380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.396512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.396664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 00:26:54.807 [2024-10-17 16:55:08.396838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.807 [2024-10-17 16:55:08.396881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:54.807 qpair failed and we were unable to recover it. 
00:26:54.807 [2024-10-17 16:55:08.397013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.807 [2024-10-17 16:55:08.397042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:54.807 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats for every retry from 16:55:08.397139 through 16:55:08.411789, cycling over tqpair=0x7f01f8000b90, 0x7f01f4000b90, 0x7f0200000b90, and 0x1b24060, all with addr=10.0.0.2, port=4420 ...]
00:26:55.094 [2024-10-17 16:55:08.411873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.411898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 
00:26:55.094 [2024-10-17 16:55:08.412508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.412909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.412936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.413029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 
00:26:55.094 [2024-10-17 16:55:08.413151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.413265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.413375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.413518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.413638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 
00:26:55.094 [2024-10-17 16:55:08.413754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.094 [2024-10-17 16:55:08.413894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.094 [2024-10-17 16:55:08.413920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.094 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.414365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.414871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.414897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.414979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.415640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.415891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.415999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.416120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.416229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.416380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.416522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.416644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.416763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.416871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.416898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.416981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.417115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.417254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.417364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.417488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.417616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.417772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.417920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.417948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.418038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.418079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 
00:26:55.095 [2024-10-17 16:55:08.418170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.095 [2024-10-17 16:55:08.418197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.095 qpair failed and we were unable to recover it. 00:26:55.095 [2024-10-17 16:55:08.418303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.418332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.418456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.418482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.418599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.418626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.418707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.418732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.418829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.418860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.418980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.419102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.419219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.419360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.419551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.419705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.419853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.419883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.420031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.420150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.420325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.420463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.420622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.420789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.420955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.420984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.421105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.421229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.421381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.421523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.421668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.421790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.421959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.421992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.422142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.422276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.422420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.422539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.422669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.422782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.422898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.422924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 00:26:55.096 [2024-10-17 16:55:08.423031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.096 [2024-10-17 16:55:08.423058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.096 qpair failed and we were unable to recover it. 
00:26:55.096 [2024-10-17 16:55:08.423156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.096 [2024-10-17 16:55:08.423182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.096 qpair failed and we were unable to recover it.
00:26:55.096 [2024-10-17 16:55:08.423271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.096 [2024-10-17 16:55:08.423299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.096 qpair failed and we were unable to recover it.
00:26:55.096 [2024-10-17 16:55:08.423412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.096 [2024-10-17 16:55:08.423439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.096 qpair failed and we were unable to recover it.
00:26:55.096 [2024-10-17 16:55:08.423517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.096 [2024-10-17 16:55:08.423544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.423657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.423684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.423818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.423858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.423964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.424106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.424217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.424360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.424531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.424809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.424954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.424982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.425116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.425145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.425300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.425347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.425476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.425507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.425624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.425669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.425751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.425777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.425883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.425924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.426119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.426249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.426393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.426532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.426699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.426851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.426958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.427881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.427995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.428028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.428137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.428163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.428250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.428288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.428381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.428424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.428552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.428582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.428677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.097 [2024-10-17 16:55:08.428706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.097 qpair failed and we were unable to recover it.
00:26:55.097 [2024-10-17 16:55:08.428836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.428865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.428992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.429847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.429982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.430019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.430128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.430155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.430240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.430269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.430387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.430416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.431180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.431220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.431359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.431388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.431506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.431534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.431625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.431652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.431776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.431801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.431909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.431936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.432893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.432921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.433891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.098 qpair failed and we were unable to recover it.
00:26:55.098 [2024-10-17 16:55:08.433994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.098 [2024-10-17 16:55:08.434030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.434923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.434949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.435970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.435995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.436944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.436969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.437075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.099 [2024-10-17 16:55:08.437115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.099 qpair failed and we were unable to recover it.
00:26:55.099 [2024-10-17 16:55:08.437208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.099 [2024-10-17 16:55:08.437239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.099 qpair failed and we were unable to recover it. 00:26:55.099 [2024-10-17 16:55:08.437396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.099 [2024-10-17 16:55:08.437423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.099 qpair failed and we were unable to recover it. 00:26:55.099 [2024-10-17 16:55:08.437518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.099 [2024-10-17 16:55:08.437545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.099 qpair failed and we were unable to recover it. 00:26:55.099 [2024-10-17 16:55:08.437642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.099 [2024-10-17 16:55:08.437668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.099 qpair failed and we were unable to recover it. 00:26:55.099 [2024-10-17 16:55:08.437757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.099 [2024-10-17 16:55:08.437783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.099 qpair failed and we were unable to recover it. 
00:26:55.099 [2024-10-17 16:55:08.437871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.099 [2024-10-17 16:55:08.437897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.099 qpair failed and we were unable to recover it. 00:26:55.099 [2024-10-17 16:55:08.437977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.438479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.438959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.438985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.439086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.439207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.439372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.439516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.439670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.439793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.439909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.439936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.440398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.440895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.440935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.441044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.441173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.441286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.441403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.441511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.441657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.441786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.441912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.441946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.442078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.442107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.442224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.442251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 
00:26:55.100 [2024-10-17 16:55:08.442339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.442366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.442465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.442491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.100 [2024-10-17 16:55:08.442572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.100 [2024-10-17 16:55:08.442597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.100 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.442691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.442725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.442814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.442841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.442926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.442952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.443575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.443872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.443995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.444124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.444242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.444405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.444541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.444651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.444787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.444912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.444938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.445512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.445898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.445938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.446043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.446182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.446317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.446462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.446599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.446713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 
00:26:55.101 [2024-10-17 16:55:08.446828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.446940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.446966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.447064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.101 [2024-10-17 16:55:08.447090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.101 qpair failed and we were unable to recover it. 00:26:55.101 [2024-10-17 16:55:08.447213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.447238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.447355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.447380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 
00:26:55.102 [2024-10-17 16:55:08.447502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.447530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.447644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.447670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.447770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.447809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.447903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.447929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.448043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.448070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 
00:26:55.102 [2024-10-17 16:55:08.448191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.448217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.448306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.448332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.448453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.448479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.448594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.448620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 00:26:55.102 [2024-10-17 16:55:08.448750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.102 [2024-10-17 16:55:08.448787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.102 qpair failed and we were unable to recover it. 
00:26:55.102 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." error triplet repeats through [2024-10-17 16:55:08.464045] for tqpair=0x1b24060, 0x7f01f8000b90, 0x7f0200000b90, and 0x7f01f4000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:55.105 [2024-10-17 16:55:08.464138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.464255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.464387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.464529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.464634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 
00:26:55.105 [2024-10-17 16:55:08.464777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.464885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.464910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 
00:26:55.105 [2024-10-17 16:55:08.465359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 00:26:55.105 [2024-10-17 16:55:08.465869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.465907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.105 qpair failed and we were unable to recover it. 
00:26:55.105 [2024-10-17 16:55:08.466020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.105 [2024-10-17 16:55:08.466049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.466135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.466249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.466362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.466596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.466701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.466811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.466934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.466960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.467105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.467247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.467414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.467530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.467669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.467781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.467890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.467916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.468010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.468119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.468286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.468436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.468555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.468675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.468806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.468832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.468968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.469142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.469251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.469393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.469509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.469621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.469730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.469836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.469862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.470018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.470140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.470251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.470370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.470473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 
00:26:55.106 [2024-10-17 16:55:08.470583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.470701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.106 [2024-10-17 16:55:08.470727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.106 qpair failed and we were unable to recover it. 00:26:55.106 [2024-10-17 16:55:08.470840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.470869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.470950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.470975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.471074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.471190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.471298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.471429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.471539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.471653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.471791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.471905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.471930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.472427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.472938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.472964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.473060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.473174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.473281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.473429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.473570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.473716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.473828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.473855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.473985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.474157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.474296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.474411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.474554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.474730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.474845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.474871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.474981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.475015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 
00:26:55.107 [2024-10-17 16:55:08.475102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.475128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.107 [2024-10-17 16:55:08.475225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.107 [2024-10-17 16:55:08.475252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.107 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.475393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.475421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.475539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.475567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.475680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.475707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 
00:26:55.108 [2024-10-17 16:55:08.475793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.475821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.475903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.475933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 
00:26:55.108 [2024-10-17 16:55:08.476419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.476906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.476946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 
00:26:55.108 [2024-10-17 16:55:08.477052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.477195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.477321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.477441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.477581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 
00:26:55.108 [2024-10-17 16:55:08.477709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.477820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.477926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.477952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.478063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.478175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 
00:26:55.108 [2024-10-17 16:55:08.478279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.478422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.478570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.478710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.478820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 
00:26:55.108 [2024-10-17 16:55:08.478935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.478962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.108 qpair failed and we were unable to recover it. 00:26:55.108 [2024-10-17 16:55:08.479067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.108 [2024-10-17 16:55:08.479094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.479883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.479927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.480065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.480183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.480305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.480432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.480555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.480671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.480818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.480954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.480980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.481578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.481881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.481969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.482105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.482223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.482342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.482456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.482565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.482704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.482818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.482962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.482990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.483112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.483247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.483368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.483538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.483683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.483845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.483960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.483985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 00:26:55.109 [2024-10-17 16:55:08.484131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.109 [2024-10-17 16:55:08.484156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.109 qpair failed and we were unable to recover it. 
00:26:55.109 [2024-10-17 16:55:08.484302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.484341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.484436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.484464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.484585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.484614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.484851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.484880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.485019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.485045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.485136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.485162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.485276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.485312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.485396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.485422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.486194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.486225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.486367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.486395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.486535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.486564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.486672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.486700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.486821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.486850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.486972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.487138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.487272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.487424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.487545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.487714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.487831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.487960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.487987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.488095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.488121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.488200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.488226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.488367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.488397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.488523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.488553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.488666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.488696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.488849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.488890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.489022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.489160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.489275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.489419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.489587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.489741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.489892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.489922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.490077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.490109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 
00:26:55.110 [2024-10-17 16:55:08.490205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.490232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.490349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.490393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.490511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.490541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.110 [2024-10-17 16:55:08.490671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.110 [2024-10-17 16:55:08.490699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.110 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.490812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.490838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.490924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.490951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.491522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.491923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.491965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.492078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.492107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.492193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.492221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.492372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.492402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.492585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.492631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.492715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.492741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.492816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.492854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.492995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.493125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.493242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.493438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.493569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.493710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.493847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.493963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.493989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.494094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.494221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.494386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.494547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.494661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.494781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.494893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.494920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.495026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.495053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.495137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.495163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.495242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.495268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.495430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.495456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 00:26:55.111 [2024-10-17 16:55:08.495545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.111 [2024-10-17 16:55:08.495572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.111 qpair failed and we were unable to recover it. 
00:26:55.111 [2024-10-17 16:55:08.495685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.495711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.495831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.495859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.495949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.495977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.496090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.496205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.496357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.496492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.496658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.496792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.496926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.496952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.497079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.497187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.497305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.497456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.497581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.497697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.497807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.497927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.497954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.498063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.498182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.498286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.498392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.498517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.498675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.498800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.498931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.498960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.499087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.499205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.499356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.499499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.499615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.499771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.499926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.499955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.500056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.500085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.500179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.500206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 
00:26:55.112 [2024-10-17 16:55:08.500353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.500382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.500498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.112 [2024-10-17 16:55:08.500540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.112 qpair failed and we were unable to recover it. 00:26:55.112 [2024-10-17 16:55:08.500675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.500704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.500793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.500824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.500922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.500948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 
00:26:55.113 [2024-10-17 16:55:08.501044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.501151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.501255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.501395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.501493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 
00:26:55.113 [2024-10-17 16:55:08.501612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.501809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.501967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.501995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.502106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.502146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 00:26:55.113 [2024-10-17 16:55:08.502241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.113 [2024-10-17 16:55:08.502268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.113 qpair failed and we were unable to recover it. 
00:26:55.116 [2024-10-17 16:55:08.516819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.516847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.516934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.516960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 
00:26:55.116 [2024-10-17 16:55:08.517411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.517942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.517970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 
00:26:55.116 [2024-10-17 16:55:08.518075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.518190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.518332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.518438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.518581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 
00:26:55.116 [2024-10-17 16:55:08.518712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.518870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.518908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.519021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.519196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.519361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 
00:26:55.116 [2024-10-17 16:55:08.519483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.519635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.519751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.519891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.519934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.116 qpair failed and we were unable to recover it. 00:26:55.116 [2024-10-17 16:55:08.520062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.116 [2024-10-17 16:55:08.520095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.520187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.520215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.520357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.520401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.520552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.520598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.520715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.520741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.520852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.520879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.520966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.520992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.521094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.521121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.521224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.521253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.521386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.521415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.521566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.521594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.521706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.521734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.521829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.521858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.521994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.522135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.522236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.522387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.522594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.522726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.522874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.522904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.523016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.523145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.523302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.523469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.523611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.523752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.523896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.523921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.524526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.524948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.524976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.525087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.525114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 
00:26:55.117 [2024-10-17 16:55:08.525202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.525229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.525334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.117 [2024-10-17 16:55:08.525362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.117 qpair failed and we were unable to recover it. 00:26:55.117 [2024-10-17 16:55:08.525464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.525506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.525631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.525663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.525783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.525811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 
00:26:55.118 [2024-10-17 16:55:08.525928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.525954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.526065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.526178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.526305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.526423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 
00:26:55.118 [2024-10-17 16:55:08.526541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.526735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.526888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.526916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.527037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.527065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 00:26:55.118 [2024-10-17 16:55:08.527172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.118 [2024-10-17 16:55:08.527200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.118 qpair failed and we were unable to recover it. 
00:26:55.118 [2024-10-17 16:55:08.527316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.118 [2024-10-17 16:55:08.527347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.118 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-recovery-failed error pair repeats over a hundred more times between 16:55:08.527 and 16:55:08.543, alternating across tqpairs 0x1b24060, 0x7f01f4000b90, 0x7f01f8000b90, and 0x7f0200000b90, all attempting addr=10.0.0.2, port=4420 with errno = 111 ...]
00:26:55.121 [2024-10-17 16:55:08.544025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.121 [2024-10-17 16:55:08.544068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.121 qpair failed and we were unable to recover it. 00:26:55.121 [2024-10-17 16:55:08.544164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.121 [2024-10-17 16:55:08.544191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.121 qpair failed and we were unable to recover it. 00:26:55.121 [2024-10-17 16:55:08.544305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.121 [2024-10-17 16:55:08.544331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.121 qpair failed and we were unable to recover it. 00:26:55.121 [2024-10-17 16:55:08.544498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.121 [2024-10-17 16:55:08.544543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.121 qpair failed and we were unable to recover it. 00:26:55.121 [2024-10-17 16:55:08.544652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.121 [2024-10-17 16:55:08.544697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.121 qpair failed and we were unable to recover it. 
00:26:55.121 [2024-10-17 16:55:08.544823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.544866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.544978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.545105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.545225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.545357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.545566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.545726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.545913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.545956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.546128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.546168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.546307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.546354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.546455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.546482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.546591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.546636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.546726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.546753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.546874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.546902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.546990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.547138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.547254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.547365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.547494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.547615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.547741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.547869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.547896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.547984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.548170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.548299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.548421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.548592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.548749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.548886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.548925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.549048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.549190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.549330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.549452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.549576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.549695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-10-17 16:55:08.549856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.549889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-10-17 16:55:08.550024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.122 [2024-10-17 16:55:08.550070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.550187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.550216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.550338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.550383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.550540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.550585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 
00:26:55.123 [2024-10-17 16:55:08.550666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.550692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.550777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.550803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.550909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.550948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.551073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.551101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.551188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.551214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 
00:26:55.123 [2024-10-17 16:55:08.551350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.551378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.551523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.551570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.551710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.551758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.551872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.551900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.552007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.552065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 
00:26:55.123 [2024-10-17 16:55:08.552233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.552264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.552371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.552400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.552528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.552556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.552682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.552712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.552859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.552884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 
00:26:55.123 [2024-10-17 16:55:08.553005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.553120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.553229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.553365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.553501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 
00:26:55.123 [2024-10-17 16:55:08.553617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.553758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.553886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.553929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.554047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.554095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.554195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.554225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 
00:26:55.123 [2024-10-17 16:55:08.554321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.123 [2024-10-17 16:55:08.554350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.123 qpair failed and we were unable to recover it. 00:26:55.123 [2024-10-17 16:55:08.554440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.554469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.554609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.554637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.554725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.554753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.554904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.554932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 
00:26:55.124 [2024-10-17 16:55:08.555075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.555102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.555177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.555221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.555344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.555373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.555504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.555533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 00:26:55.124 [2024-10-17 16:55:08.555666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.124 [2024-10-17 16:55:08.555695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.124 qpair failed and we were unable to recover it. 
[... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it" sequence repeats continuously from 16:55:08.555822 through 16:55:08.572463, for tqpair handles 0x1b24060, 0x7f01f8000b90, 0x7f01f4000b90, and 0x7f0200000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:55.127 [2024-10-17 16:55:08.572588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.572632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.572743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.572770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.572891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.572917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.573021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.573179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 
00:26:55.127 [2024-10-17 16:55:08.573343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.573466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.573633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.573751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.573863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.573889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 
00:26:55.127 [2024-10-17 16:55:08.574015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.574126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.574277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.574422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.574535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 
00:26:55.127 [2024-10-17 16:55:08.574684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.574855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.574900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.575026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.575083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.575197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.575228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.575350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.575379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 
00:26:55.127 [2024-10-17 16:55:08.575550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.575598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.575753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.575790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.575935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.575962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.576056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.576095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.576211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.576237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 
00:26:55.127 [2024-10-17 16:55:08.576428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.576476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.127 [2024-10-17 16:55:08.576630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.127 [2024-10-17 16:55:08.576664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.127 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.576825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.576873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.577007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.577171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.577324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.577484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.577620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.577785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.577912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.577942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.578100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.578141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.578264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.578293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.578417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.578460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.578627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.578673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.578792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.578822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.578952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.578981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.579101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.579126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.579223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.579253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.579397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.579444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.579536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.579565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.579681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.579709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.579835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.579861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.579994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.580170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.580330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.580499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.580667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.580793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.580954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.580998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.581139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.581169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.581268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.581298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.581411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.581437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.581586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.581633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.581719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.581760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 00:26:55.128 [2024-10-17 16:55:08.581904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.128 [2024-10-17 16:55:08.581931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.128 qpair failed and we were unable to recover it. 
00:26:55.128 [2024-10-17 16:55:08.582053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.582082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.582195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.582225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.582388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.582435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.582570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.582615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.582730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.582757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 
00:26:55.129 [2024-10-17 16:55:08.582900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.582926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.583038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.583209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.583371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.583513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 
00:26:55.129 [2024-10-17 16:55:08.583653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.583819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.583963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.583991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.584108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.584235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 
00:26:55.129 [2024-10-17 16:55:08.584389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.584521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.584644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.584831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.584951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.584981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 
00:26:55.129 [2024-10-17 16:55:08.585096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.585125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.585216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.585244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.585362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.585412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.585552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.585600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 00:26:55.129 [2024-10-17 16:55:08.585735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.129 [2024-10-17 16:55:08.585771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.129 qpair failed and we were unable to recover it. 
00:26:55.132 [2024-10-17 16:55:08.602036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.602064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.602207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.602233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.602387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.602416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.602565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.602594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.602719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.602747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 
00:26:55.132 [2024-10-17 16:55:08.602858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.602885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.602998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.603038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.603180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.603206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.603360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.603388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 00:26:55.132 [2024-10-17 16:55:08.603504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.132 [2024-10-17 16:55:08.603532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.132 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.603654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.603682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.603800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.603842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.603950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.603975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.604119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.604144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.604255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.604297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.604485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.604513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.604610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.604638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.604754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.604782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.604897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.604925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.605046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.605072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.605191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.605218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.605354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.605380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.605537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.605564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.605684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.605712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.605896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.605924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.606090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.606195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.606340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.606444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.606610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.606756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.606907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.606936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.607075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.607234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.607380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.607521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.607670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.607829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.607937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.607965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.608078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.608104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 
00:26:55.133 [2024-10-17 16:55:08.608244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.608270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.608385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.608411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.133 qpair failed and we were unable to recover it. 00:26:55.133 [2024-10-17 16:55:08.608559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.133 [2024-10-17 16:55:08.608587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.608670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.608699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.608822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.608853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.609017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.609126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.609274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.609484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.609668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.609791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.609918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.609943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.610029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.610055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.610168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.610194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.610360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.610388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.610510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.610537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.610635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.610663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.610839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.610896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.610997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.611173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.611329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.611506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.611674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.611810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.611931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.611957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.612075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.612190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.612296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.612430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.612570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.612713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.612860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.612968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.612995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.613135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.613161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 00:26:55.134 [2024-10-17 16:55:08.613248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.134 [2024-10-17 16:55:08.613273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.134 qpair failed and we were unable to recover it. 
00:26:55.134 [2024-10-17 16:55:08.613358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.134 [2024-10-17 16:55:08.613383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.134 qpair failed and we were unable to recover it.
00:26:55.134 [2024-10-17 16:55:08.613519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.134 [2024-10-17 16:55:08.613544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.134 qpair failed and we were unable to recover it.
00:26:55.134 [2024-10-17 16:55:08.613631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.134 [2024-10-17 16:55:08.613656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.134 qpair failed and we were unable to recover it.
00:26:55.134 [2024-10-17 16:55:08.613795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.134 [2024-10-17 16:55:08.613822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.134 qpair failed and we were unable to recover it.
00:26:55.134 [2024-10-17 16:55:08.613913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.134 [2024-10-17 16:55:08.613940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.134 qpair failed and we were unable to recover it.
00:26:55.134 [2024-10-17 16:55:08.614089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.614133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.614238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.614268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.614385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.614413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.614538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.614567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.614667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.614696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.614824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.614852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.614990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.615177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.615317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.615477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.615644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.615813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.615928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.615954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.616911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.616936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.617902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.617985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.618866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.618979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.619018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.619141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.135 [2024-10-17 16:55:08.619168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.135 qpair failed and we were unable to recover it.
00:26:55.135 [2024-10-17 16:55:08.619285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.619311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.619422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.619449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.619566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.619593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.619682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.619709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.619797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.619823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.619935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.619962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.620940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.620965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.621877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.621902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.622925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.622950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.623916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.623941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.624064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.624092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.624206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.624232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.136 [2024-10-17 16:55:08.624351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.136 [2024-10-17 16:55:08.624376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.136 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.624485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.624511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.624620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.624650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.624774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.624800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.624945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.624972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.625939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.625964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.626144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.626302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.626443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.626589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.626734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.626847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.626976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.627867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.627981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.628013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.628117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.628147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.628306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.628350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.628483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.628530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.628636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.137 [2024-10-17 16:55:08.628666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.137 qpair failed and we were unable to recover it.
00:26:55.137 [2024-10-17 16:55:08.628756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.628783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.628869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.628896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.629046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.629161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.629306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.629441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.629580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.138 [2024-10-17 16:55:08.629713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.138 qpair failed and we were unable to recover it.
00:26:55.138 [2024-10-17 16:55:08.629817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.629842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.629948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.629974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.630111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.630293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.630472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 
00:26:55.138 [2024-10-17 16:55:08.630638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.630752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.630864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.630966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.630991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.631079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 
00:26:55.138 [2024-10-17 16:55:08.631185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.631299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.631436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.631545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.631663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 
00:26:55.138 [2024-10-17 16:55:08.631780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.631912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.631938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.632031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.632153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.632301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 
00:26:55.138 [2024-10-17 16:55:08.632411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.632550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.632691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.632813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.632852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.633014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.633042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 
00:26:55.138 [2024-10-17 16:55:08.633160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.633186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.633339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.633367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.633484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.633512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.633616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.633644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 00:26:55.138 [2024-10-17 16:55:08.633777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.138 [2024-10-17 16:55:08.633802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.138 qpair failed and we were unable to recover it. 
00:26:55.138 [2024-10-17 16:55:08.633891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.633916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.634044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.634200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.634376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.634519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.634668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.634795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.634955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.634983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.635103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.635214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.635400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.635524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.635659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.635773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.635950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.635979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.636113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.636139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.636253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.636279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.636410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.636439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.636587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.636615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.636740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.636768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.636899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.636924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.637517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.637950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.637983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.638095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.638208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.638323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.638463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.638644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.638795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.638915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.638941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.639025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.639051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.639189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.639214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.639343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.639372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.639519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.639547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 
00:26:55.139 [2024-10-17 16:55:08.639641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.639671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.639797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.639825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.639989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.640021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.139 qpair failed and we were unable to recover it. 00:26:55.139 [2024-10-17 16:55:08.640128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.139 [2024-10-17 16:55:08.640170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.640267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.640293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.640442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.640470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.640572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.640600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.640719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.640747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.640894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.640923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.641031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.641162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.641279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.641434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.641558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.641674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.641790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.641817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.641967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.642143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.642308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.642466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.642608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.642748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.642860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.642886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.642998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.643031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.643157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.643185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.643420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.643469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.643593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.643621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.643736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.643765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.643905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.643933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.644083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.644196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.644337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.644499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.644632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.644774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.644918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.644945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.645073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.645099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.645237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.645280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.645396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.645425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.645558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.645587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.645747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.645789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.645903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.645928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.646064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.646099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.646224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.646253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 00:26:55.140 [2024-10-17 16:55:08.646404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.140 [2024-10-17 16:55:08.646432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.140 qpair failed and we were unable to recover it. 
00:26:55.140 [2024-10-17 16:55:08.646538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.646568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.646684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.646712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.646817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.646846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.646934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.646961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.647101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.647126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.647263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.647306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.647407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.647436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.647596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.647624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.647774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.647802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.647925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.647953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.648104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.648144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.648267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.648306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.648428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.648455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.648611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.648640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.648800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.648829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.648952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.648982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.649095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.649205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.649337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.649486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.649633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.649794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.649947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.649976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.650124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.650253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.650385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.650547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.650671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.650785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.650972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.650997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.651118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.651144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.651302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.651330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.651472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.651500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.651594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.651622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.651712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.651740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.651855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.651883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.652031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.652057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.652214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.652239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.652455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.652481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.652601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.652626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 
00:26:55.141 [2024-10-17 16:55:08.652774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.652798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.652952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.652977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.141 qpair failed and we were unable to recover it. 00:26:55.141 [2024-10-17 16:55:08.653079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.141 [2024-10-17 16:55:08.653105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.653186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.653211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.653323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.653348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-10-17 16:55:08.653463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.653488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.653621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.653649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.653805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.653833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.653951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.653979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.654118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.654143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-10-17 16:55:08.654280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.654305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.654393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.654419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.654570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.654598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.654746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.654774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.654924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.654952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-10-17 16:55:08.655099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.655125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.655246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.655274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.655405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.655434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.655577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.655605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.655777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.655836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-10-17 16:55:08.655936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.655963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.656079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.656118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.656220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.656248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.656356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.656386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 00:26:55.142 [2024-10-17 16:55:08.656572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.142 [2024-10-17 16:55:08.656601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.142 qpair failed and we were unable to recover it. 
00:26:55.142 [2024-10-17 16:55:08.656699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.142 [2024-10-17 16:55:08.656729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.142 qpair failed and we were unable to recover it.
00:26:55.142 [2024-10-17 16:55:08.656875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.142 [2024-10-17 16:55:08.656903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.142 qpair failed and we were unable to recover it.
00:26:55.142 [2024-10-17 16:55:08.657051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.142 [2024-10-17 16:55:08.657078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.142 qpair failed and we were unable to recover it.
00:26:55.142 [2024-10-17 16:55:08.657190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.657216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.657362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.657388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.657546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.657589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.657688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.657716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.657797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.657840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.657955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.657980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.658933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.658962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.659951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.659980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.660132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.660157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.660276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.660301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.660447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.660476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.660628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.660656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.660790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.660819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.660917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.660945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.661098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.661125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.661249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.661278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.661427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.661455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.661587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.661615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.661738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.661766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.661892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.661921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.662926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.662955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.663082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.663112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.663247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.663292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.663432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.143 [2024-10-17 16:55:08.663476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.143 qpair failed and we were unable to recover it.
00:26:55.143 [2024-10-17 16:55:08.663605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.663649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.663735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.663763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.663867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.663894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.664059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.664210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.664346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.664523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.664641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.664851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.664977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.665887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.665983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.666122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.666289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.666432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.666610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.666772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.666909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.666939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.667931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.667958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.668916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.668941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.669923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.669950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.670104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.670143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.670260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.670291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.670389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.670418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.144 [2024-10-17 16:55:08.670545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.144 [2024-10-17 16:55:08.670574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.144 qpair failed and we were unable to recover it.
00:26:55.145 [2024-10-17 16:55:08.670699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.145 [2024-10-17 16:55:08.670751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.145 qpair failed and we were unable to recover it.
00:26:55.145 [2024-10-17 16:55:08.670845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.145 [2024-10-17 16:55:08.670874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.145 qpair failed and we were unable to recover it.
00:26:55.145 [2024-10-17 16:55:08.671018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.671044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.671136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.671180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.671274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.671303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.671464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.671493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.671650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.671699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.671821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.671849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.671985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.672113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.672222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.672371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.672587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.672762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.672932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.672970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.673080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.673195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.673359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.673560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.673694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.673813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.673961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.673989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.674095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.674205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.674324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.674452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.674616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.674756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.674877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.674903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.675430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.675872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.675982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.676133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.676274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.676456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.676601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.676772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.676894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.676919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.677041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.677068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.677154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.677181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.677337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.677366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 00:26:55.145 [2024-10-17 16:55:08.677461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.145 [2024-10-17 16:55:08.677489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.145 qpair failed and we were unable to recover it. 
00:26:55.145 [2024-10-17 16:55:08.677576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.677605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.677705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.677734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.677852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.677881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.678048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.678211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.678366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.678541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.678696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.678823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.678944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.678970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.679077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.679241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.679366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.679489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.679605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.679725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.679847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.679965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.679994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.680117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.680145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.680280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.680324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.680454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.680499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.680600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.680635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.680779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.680807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.680928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.680954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.681049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.681076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.681239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.681268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.681414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.681442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.681575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.681620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.681810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.681861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.682024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.682158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.682286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.682438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.682549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.682762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.682906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.682933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.683048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.683076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.683166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.683192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.683305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.683331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 00:26:55.146 [2024-10-17 16:55:08.683448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.146 [2024-10-17 16:55:08.683477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.146 qpair failed and we were unable to recover it. 
00:26:55.146 [2024-10-17 16:55:08.683629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.683658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.683776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.683804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.683960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.683986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.684971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.684998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.685119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.685165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.685303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.685333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.685452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.685499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.685622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.685649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.685767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.685795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.685901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.685940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.686928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.686956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.687101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.687128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.687264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.687292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.687432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.687460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.687560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.687589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.687759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.687805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.687916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.687942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.688909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.688939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.689931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.689959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.690087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.690115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.690225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.690254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.690387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.690435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.690580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.690624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.147 [2024-10-17 16:55:08.690767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.147 [2024-10-17 16:55:08.690793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.147 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.690906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.690938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.691912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.691940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.692091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.692117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.692263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.692292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.692429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.692457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.692563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.692592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.692706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.692735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.692857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.692886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.693831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.693860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.694065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.694224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.694360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.694578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.694757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.694871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.694974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.695927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.695953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.696863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.696983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.697019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.697145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.697172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.697255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.697282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.697408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.697438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.697540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.697569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.148 [2024-10-17 16:55:08.697700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.148 [2024-10-17 16:55:08.697730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.148 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.697842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.697868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.697998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.698945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.698976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.699094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.699124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.699216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.699242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.699377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.699420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.699568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.699612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.699719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.699763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.699905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.699932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.700025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.700052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.700170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.700199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.700317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.700346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.700527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.149 [2024-10-17 16:55:08.700577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.149 qpair failed and we were unable to recover it.
00:26:55.149 [2024-10-17 16:55:08.700675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.700705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.700841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.700871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.701007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.701149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.701312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 
00:26:55.149 [2024-10-17 16:55:08.701474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.701594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.701731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.701890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.701929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 
00:26:55.149 [2024-10-17 16:55:08.702135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 
00:26:55.149 [2024-10-17 16:55:08.702747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.702963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.702990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.703076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.703242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 
00:26:55.149 [2024-10-17 16:55:08.703396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.703532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.703685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.703810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.149 [2024-10-17 16:55:08.703966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.703995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 
00:26:55.149 [2024-10-17 16:55:08.704201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.149 [2024-10-17 16:55:08.704230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.149 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.704353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.704381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.704478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.704511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.704669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.704718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.704803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.704830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.704971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.704999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.705100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.705233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.705359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.705510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.705639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.705757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.705889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.705914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.706055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.706193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.706340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.706464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.706615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.706758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.706944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.706971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.707091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.707118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.707202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.707227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.707364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.707392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.707486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.707528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.707661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.707690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.707810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.707839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.707988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.708159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.708309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.708423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.708565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.708742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.708894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.708922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.709093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.709236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.709349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.709473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.709631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.709758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.709900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.709929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.150 [2024-10-17 16:55:08.710069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.710096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.710205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.710248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.710354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.710383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.710483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.710511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 00:26:55.150 [2024-10-17 16:55:08.710632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.150 [2024-10-17 16:55:08.710660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.150 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.710748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.710776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.710928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.710968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.711098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.711219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.711372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.711528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.711666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.711808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.711973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.711998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.712095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.712207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.712344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.712495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.712628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.712777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.712941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.712984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.713139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.713167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.713273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.713304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.713451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.713480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.713664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.713717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.713808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.713835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.713931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.713959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.714059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.714179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.714297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.714472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.714683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.714811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.714931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.714969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.715102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.715148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.715297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.715328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.715471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.715563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.715691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.715735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.715849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.715875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.715990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.716032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.716122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.716148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.716260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.716308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.716479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.716532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.716659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.716709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.716851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.716877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.716995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.717029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.717143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.717169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.717313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.717340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.717528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.717575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.717670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.717698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 
00:26:55.151 [2024-10-17 16:55:08.717829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.717855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.717977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.718020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.151 [2024-10-17 16:55:08.718136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.151 [2024-10-17 16:55:08.718162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.151 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.718269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.718295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.718441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.718469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.718596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.718624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.718715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.718743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.718883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.718909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.718988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.719142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.719319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.719437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.719590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.719782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.719922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.719949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.720092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.720119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.720206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.720233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.720365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.720409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.720558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.720588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.720679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.720708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.720833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.720869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.721021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.721051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.721157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.721186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.721323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.721352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.721506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.721553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.721688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.721732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.721831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.721857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.722416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.722936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.722982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.723112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.723229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.723366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.723483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.723595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.723725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.723873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.723899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.723976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.724143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.724266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.152 [2024-10-17 16:55:08.724385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.724569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.724738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.724852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.724881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 00:26:55.152 [2024-10-17 16:55:08.725015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.152 [2024-10-17 16:55:08.725044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.152 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.725182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.725228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.725395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.725437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.725604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.725651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.725768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.725795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.725890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.725916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.726020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.726219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.726339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.726526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.726680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.726831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.726967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.726993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.727091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.727221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.727346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.727475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.727643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.727798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.727911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.727955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.728092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.728121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.728237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.728265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.728384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.728412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.728510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.728538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.728688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.728734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.728828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.728855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.728962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.729129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.729300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.729430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.729574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.729727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.729838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.729958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.729984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.730106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.730222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.730353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.730507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.730638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.730765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.730929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.730955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 
00:26:55.153 [2024-10-17 16:55:08.731045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.731071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.731212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.731244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.153 [2024-10-17 16:55:08.731395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.153 [2024-10-17 16:55:08.731444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.153 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.731526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.731552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.731678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.731707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.731814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.731841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.731953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.731979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.732427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.732876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.732973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.733129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.733244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.733357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.733498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.733640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.733765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.733907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.733934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.734014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.734154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.734306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.734458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.734634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.734764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.734909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.734951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.735037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.735171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.735317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.735432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.735611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.735795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.735957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.735983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.736108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.736239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.736359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.736495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.736661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.736768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.736895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.736923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.737011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.737037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.737160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.737186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 
00:26:55.154 [2024-10-17 16:55:08.737323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.737349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.737461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.737488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.737601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.737638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.154 [2024-10-17 16:55:08.737738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.154 [2024-10-17 16:55:08.737768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.154 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.737915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.737953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 
00:26:55.155 [2024-10-17 16:55:08.738121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.738281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.738413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.738547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.738661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 
00:26:55.155 [2024-10-17 16:55:08.738800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.738923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.738962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.739101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.739130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.739273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.739303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.739399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.739429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 
00:26:55.155 [2024-10-17 16:55:08.739560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.739588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.739702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.739748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.739879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.739906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.740024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.740147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 
00:26:55.155 [2024-10-17 16:55:08.740276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.740441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.740572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.740699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 00:26:55.155 [2024-10-17 16:55:08.740843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.155 [2024-10-17 16:55:08.740887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.155 qpair failed and we were unable to recover it. 
00:26:55.155 [2024-10-17 16:55:08.740997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.741028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.741132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.741161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.741247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.741276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.741414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.741442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.741591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.741620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.741754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.741797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.742012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.742058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.742197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.742243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.742397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.742448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.742567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.742618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.742786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.742834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.742947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.742973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.743104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.743252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.743420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.743566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.743709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.743877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.743996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.744126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.744241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.744382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.744534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.744671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.155 [2024-10-17 16:55:08.744799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.155 [2024-10-17 16:55:08.744839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.155 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.744947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.744986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.745114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.745145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.745322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.745366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.745498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.745542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.745668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.745712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.745793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.745821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.745933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.745972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.746117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.746157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.746289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.746324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.746432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.746461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.746589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.746620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.746768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.746817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.746987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.747841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.747976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.748022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.748157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.748205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.748345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.748377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.748493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.748523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.748704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.748736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.748859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.748903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.749945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.749971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.750065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.750091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.750211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.750237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.750400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.750450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.750562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.750604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.750727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.750756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.750963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.750991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.751155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.751183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.751313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.751344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.751476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.751504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.751595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.751624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.751742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.751771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.751866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.751896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.752029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.752055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.752140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.752166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.752249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.156 [2024-10-17 16:55:08.752294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.156 qpair failed and we were unable to recover it.
00:26:55.156 [2024-10-17 16:55:08.752429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.752459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.752556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.752586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.752686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.752711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.752848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.752888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.752992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.753029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.753131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.753161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.753293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.753344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.753487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.753524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.753660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.753686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.753882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.753909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.754060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.754225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.754337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.754500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.754688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.754832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.754976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.157 [2024-10-17 16:55:08.755855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.157 qpair failed and we were unable to recover it.
00:26:55.157 [2024-10-17 16:55:08.755947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.755974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-10-17 16:55:08.756545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.756949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.756976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.757097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.757124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-10-17 16:55:08.757240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.757267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.757428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.757456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.757609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.757635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.757756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.757784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.757903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.757939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 
00:26:55.157 [2024-10-17 16:55:08.758045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.758072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.758160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.758187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.758266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.758311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.157 [2024-10-17 16:55:08.758439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.157 [2024-10-17 16:55:08.758467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.157 qpair failed and we were unable to recover it. 00:26:55.158 [2024-10-17 16:55:08.758587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-10-17 16:55:08.758615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-10-17 16:55:08.758706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-10-17 16:55:08.758740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-10-17 16:55:08.758908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-10-17 16:55:08.758934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-10-17 16:55:08.759014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-10-17 16:55:08.759041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-10-17 16:55:08.759120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-10-17 16:55:08.759164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 00:26:55.158 [2024-10-17 16:55:08.759291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.158 [2024-10-17 16:55:08.759331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.158 qpair failed and we were unable to recover it. 
00:26:55.158 [2024-10-17 16:55:08.759416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.759444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.434 qpair failed and we were unable to recover it. 00:26:55.434 [2024-10-17 16:55:08.759566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.759597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.434 qpair failed and we were unable to recover it. 00:26:55.434 [2024-10-17 16:55:08.759701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.759730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.434 qpair failed and we were unable to recover it. 00:26:55.434 [2024-10-17 16:55:08.759845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.759890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.434 qpair failed and we were unable to recover it. 00:26:55.434 [2024-10-17 16:55:08.760034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.760064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.434 qpair failed and we were unable to recover it. 
00:26:55.434 [2024-10-17 16:55:08.760179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.760237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.434 qpair failed and we were unable to recover it. 00:26:55.434 [2024-10-17 16:55:08.760389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.434 [2024-10-17 16:55:08.760437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.760544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.760588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.760717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.760746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.760861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.760887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.760977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.761100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.761253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.761391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.761506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.761616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.761733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.761872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.761898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.762037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.762148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.762264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.762386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.762536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.762721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.762858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.762883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.762996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.763115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.763249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.763357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.763474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.763642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.763803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.763940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.763968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.764101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.764225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.764355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.764472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.764611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.764732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.435 [2024-10-17 16:55:08.764895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.764933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 
00:26:55.435 [2024-10-17 16:55:08.765038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.435 [2024-10-17 16:55:08.765066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.435 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.765611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.765880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.765987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.766103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.766243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.766349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.766502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.766635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.766753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.766896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.766922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.767570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.767927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.767953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.768084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.768205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.768354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.768474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.768603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.768755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.768920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.768948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.769572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.769966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.769992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.770084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.770111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 
00:26:55.436 [2024-10-17 16:55:08.770200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.770226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.436 [2024-10-17 16:55:08.770322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.436 [2024-10-17 16:55:08.770347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.436 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.770442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.770469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.770556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.770583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.770701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.770727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.770815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.770842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.770952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.770978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.771099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.771216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.771332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.771496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.771629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.771763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.771926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.771954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.772077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.772219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.772363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.772470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.772599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.772716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.772854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.772893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.772988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.773122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.773244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.773411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.773536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.773647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.773767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.773896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.773922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.774039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.774175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.774290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.774473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.774661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.774790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.774929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.774961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.775120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.775148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.775242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.775267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.775362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.775391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.775529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.775572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 
00:26:55.437 [2024-10-17 16:55:08.775702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.775749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.775861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.437 [2024-10-17 16:55:08.775887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.437 qpair failed and we were unable to recover it. 00:26:55.437 [2024-10-17 16:55:08.776022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.776141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.776255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.776392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.776526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.776637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.776781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.776896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.776927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.777019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.777166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.777324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.777469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.777614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.777740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.777909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.777948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.778063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.778208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.778331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.778460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.778610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.778725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.778906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.778950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.779079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.779108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.779198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.779226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.779324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.779375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.779591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.779623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.779738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.779767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.779915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.779944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.780087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.780127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.780224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.780253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.780382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.780411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.780558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.780611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 00:26:55.438 [2024-10-17 16:55:08.780699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.438 [2024-10-17 16:55:08.780725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.438 qpair failed and we were unable to recover it. 
00:26:55.438 [2024-10-17 16:55:08.780845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.780885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.780973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.781007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.781099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.781125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.781220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.781247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.781366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.781393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.781477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.781504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.781615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.438 [2024-10-17 16:55:08.781641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.438 qpair failed and we were unable to recover it.
00:26:55.438 [2024-10-17 16:55:08.781775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.781805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.781935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.781967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.782133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.782241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.782411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.782575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.782697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.782848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.782976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.783955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.783994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.784136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.784248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.784416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.784555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.784708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.784845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.784975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.785928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.785956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.786105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.786132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.786226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.786253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.786389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.786417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.439 qpair failed and we were unable to recover it.
00:26:55.439 [2024-10-17 16:55:08.786534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.439 [2024-10-17 16:55:08.786576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.786665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.786700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.786816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.786844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.786951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.786978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.787070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.787098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.787208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.787247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.787387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.787418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.787536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.787565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.787697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.787726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.787877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.787916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.788874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.788988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.789123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.789238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.789378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.789551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.789713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.789922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.789952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.790942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.790984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.791142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.791265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.791415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.791626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.791756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.791870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.791993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.792031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.792157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.792200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.792292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.792320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.440 qpair failed and we were unable to recover it.
00:26:55.440 [2024-10-17 16:55:08.792450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.440 [2024-10-17 16:55:08.792481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.792599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.792627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.792781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.792810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.792907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.792936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.793085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.793114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.793232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.793260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.793357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.793385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.793511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.793540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.793691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.793843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.793872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.794945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.794972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.795114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.795275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.795482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.795618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.795753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.795866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.795988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.796113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.796243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.796390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.796572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.796733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.796903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.796942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.797050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.797077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.797163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.797189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.797293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.797322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.797421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.797450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.797622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.441 [2024-10-17 16:55:08.797654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2472890 Killed "${NVMF_APP[@]}" "$@"
00:26:55.441 qpair failed and we were unable to recover it.
00:26:55.441 [2024-10-17 16:55:08.797762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.441 [2024-10-17 16:55:08.797790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.441 qpair failed and we were unable to recover it. 00:26:55.441 [2024-10-17 16:55:08.797945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.441 [2024-10-17 16:55:08.797988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.441 qpair failed and we were unable to recover it. 00:26:55.441 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:55.441 [2024-10-17 16:55:08.798114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.441 [2024-10-17 16:55:08.798142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.441 qpair failed and we were unable to recover it. 00:26:55.441 [2024-10-17 16:55:08.798256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.441 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:55.441 [2024-10-17 16:55:08.798302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.441 qpair failed and we were unable to recover it. 
00:26:55.441 [2024-10-17 16:55:08.798436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.441 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:55.441 [2024-10-17 16:55:08.798485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:55.442 [2024-10-17 16:55:08.798624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.798674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 [2024-10-17 16:55:08.798763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.798800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.798914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.798941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 [2024-10-17 16:55:08.799055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.799212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.799335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.799441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.799545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 [2024-10-17 16:55:08.799649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.799795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.799908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.799933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.800062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.800102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.800237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.800295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 [2024-10-17 16:55:08.800410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.800440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.800585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.800627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.800720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.800748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.800869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.800898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.801023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 [2024-10-17 16:55:08.801178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.801328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.801482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.801605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.801731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 [2024-10-17 16:55:08.801886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.801926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.802030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.802190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2473449 00:26:55.442 [2024-10-17 16:55:08.802336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2473449 00:26:55.442 [2024-10-17 16:55:08.802475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:55.442 [2024-10-17 16:55:08.802605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2473449 ']' 00:26:55.442 [2024-10-17 16:55:08.802743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.442 [2024-10-17 16:55:08.802873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.802905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.442 [2024-10-17 16:55:08.803026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.803054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.442 [2024-10-17 16:55:08.803141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.803167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.442 [2024-10-17 16:55:08.803261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.803305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 
00:26:55.442 16:55:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 [2024-10-17 16:55:08.803451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.442 [2024-10-17 16:55:08.803488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.442 qpair failed and we were unable to recover it. 00:26:55.442 [2024-10-17 16:55:08.803619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.803648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.803744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.803775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.803876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.803906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.804051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 
00:26:55.443 [2024-10-17 16:55:08.804193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.804341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.804509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.804634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.804779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 
00:26:55.443 [2024-10-17 16:55:08.804921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.804948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.805037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.805143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.805250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.805400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 
00:26:55.443 [2024-10-17 16:55:08.805539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.805713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.805841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.805870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.806021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.806061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.806189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.806216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 
00:26:55.443 [2024-10-17 16:55:08.806337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.806367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.806511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.806543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.806693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.806740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.806841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.806870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.806995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.807032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 
00:26:55.443 [2024-10-17 16:55:08.807135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.807161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.807257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.807283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.807396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.807429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.807584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.807631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 00:26:55.443 [2024-10-17 16:55:08.807747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.443 [2024-10-17 16:55:08.807790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.443 qpair failed and we were unable to recover it. 
00:26:55.443 [2024-10-17 16:55:08.807916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.443 [2024-10-17 16:55:08.807946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.443 qpair failed and we were unable to recover it.
00:26:55.443 [2024-10-17 16:55:08.808719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.443 [2024-10-17 16:55:08.808758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.443 qpair failed and we were unable to recover it.
00:26:55.443 [2024-10-17 16:55:08.809029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.444 [2024-10-17 16:55:08.809069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.444 qpair failed and we were unable to recover it.
00:26:55.444 [2024-10-17 16:55:08.809849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.444 [2024-10-17 16:55:08.809893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.444 qpair failed and we were unable to recover it.
00:26:55.446 [2024-10-17 16:55:08.824368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.824393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.824492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.824519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.824625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.824666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.824804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.824832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.824934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.824962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 
00:26:55.446 [2024-10-17 16:55:08.825071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.825207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.825361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.825483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.825592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 
00:26:55.446 [2024-10-17 16:55:08.825706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.825820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.825939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.825966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.826087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.446 [2024-10-17 16:55:08.826114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.446 qpair failed and we were unable to recover it. 00:26:55.446 [2024-10-17 16:55:08.826199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.826310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.826429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.826537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.826645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.826754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.826870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.826897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.826980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.827142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.827253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.827380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.827527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.827693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.827808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.827944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.827970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.828110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.828137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.828252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.828280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.828423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.828466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.828574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.828601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.828743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.828769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.828885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.828914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.829012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.829132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.829269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.829383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.829517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.829619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.829737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.829883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.829922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.830052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.830195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.830306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.830422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.830521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.830684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 00:26:55.447 [2024-10-17 16:55:08.830795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.447 [2024-10-17 16:55:08.830820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.447 qpair failed and we were unable to recover it. 
00:26:55.447 [2024-10-17 16:55:08.830904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.830929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.831519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.831889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.831914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.832033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.832161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.832294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.832455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.832602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.832715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.832817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.832919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.832945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.833365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.833907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.833934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.834022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.834136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.834297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.834418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.834537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.834658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.834811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.834949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.834976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.835073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.835118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 00:26:55.448 [2024-10-17 16:55:08.835246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.448 [2024-10-17 16:55:08.835274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.448 qpair failed and we were unable to recover it. 
00:26:55.448 [2024-10-17 16:55:08.835364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.448 [2024-10-17 16:55:08.835391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.448 qpair failed and we were unable to recover it.
00:26:55.448 [2024-10-17 16:55:08.835484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.448 [2024-10-17 16:55:08.835511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.448 qpair failed and we were unable to recover it.
00:26:55.448 [2024-10-17 16:55:08.835598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.448 [2024-10-17 16:55:08.835624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.448 qpair failed and we were unable to recover it.
00:26:55.448 [2024-10-17 16:55:08.835742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.448 [2024-10-17 16:55:08.835768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.448 qpair failed and we were unable to recover it.
00:26:55.448 [2024-10-17 16:55:08.835847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.448 [2024-10-17 16:55:08.835873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.448 qpair failed and we were unable to recover it.
00:26:55.448 [2024-10-17 16:55:08.835966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.448 [2024-10-17 16:55:08.835993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.448 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.836113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.836140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.836237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.836263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.836405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.836431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.836578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.836604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.836689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.836717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.836821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.836860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.837923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.837951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.838973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.838999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.839954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.839981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.840944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.840970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.841101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.841130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.841220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.841246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.841359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.841384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.449 [2024-10-17 16:55:08.841521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.449 [2024-10-17 16:55:08.841546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.449 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.841643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.841682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.841783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.841811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.841905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.841931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.842919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.842945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.843949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.843974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.844967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.844996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.845146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.845325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.845439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.845576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.845716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.845831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.845962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.846945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.846971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.450 [2024-10-17 16:55:08.847076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.450 [2024-10-17 16:55:08.847116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.450 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.847232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.847259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.847339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.847365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.847487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.847514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.847631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.847657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.847790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.847829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.847958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.847985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.848111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.451 [2024-10-17 16:55:08.848137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.451 qpair failed and we were unable to recover it.
00:26:55.451 [2024-10-17 16:55:08.848217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.848243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.848356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.848382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.848495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.848522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.848606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.848632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.848771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.848797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 
00:26:55.451 [2024-10-17 16:55:08.848897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.848937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849356] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:26:55.451 [2024-10-17 16:55:08.849449] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.451 [2024-10-17 16:55:08.849469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.849915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.849954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.850064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 
00:26:55.451 [2024-10-17 16:55:08.850182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.850303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.850439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.850578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.850695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 
00:26:55.451 [2024-10-17 16:55:08.850848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.850878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.850984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.851111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.851256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.851392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 
00:26:55.451 [2024-10-17 16:55:08.851499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.851661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.851820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.851860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.851983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.852017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.852105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.852131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 
00:26:55.451 [2024-10-17 16:55:08.852214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.852240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.852349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.451 [2024-10-17 16:55:08.852376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.451 qpair failed and we were unable to recover it. 00:26:55.451 [2024-10-17 16:55:08.852454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.852479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.852570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.852596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.852691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.852721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.852805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.852832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.852959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.852988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.853087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.853250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.853390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.853506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.853674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.853786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.853910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.853938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.854054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.854199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.854311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.854423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.854540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.854691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.854806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.854951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.854977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.855465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.855959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.855988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.856099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.856237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.856353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.856462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.856604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 
00:26:55.452 [2024-10-17 16:55:08.856741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.856854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.856879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.856995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.452 [2024-10-17 16:55:08.857029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.452 qpair failed and we were unable to recover it. 00:26:55.452 [2024-10-17 16:55:08.857125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.857243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 
00:26:55.453 [2024-10-17 16:55:08.857375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.857544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.857663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.857784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.857899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.857924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 
00:26:55.453 [2024-10-17 16:55:08.858039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.858158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.858266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.858386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.858518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 
00:26:55.453 [2024-10-17 16:55:08.858637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.858779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.858922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.858947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.859063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.859089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 00:26:55.453 [2024-10-17 16:55:08.859204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.453 [2024-10-17 16:55:08.859229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.453 qpair failed and we were unable to recover it. 
00:26:55.453 [2024-10-17 16:55:08.859316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.859341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.859423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.859449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.859561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.859586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.859671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.859697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.859798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.859838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.859931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.859970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.860915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.860955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.861882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.861977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.862011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.862103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.862131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.862216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.862244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.862360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.453 [2024-10-17 16:55:08.862387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.453 qpair failed and we were unable to recover it.
00:26:55.453 [2024-10-17 16:55:08.862496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.862522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.862610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.862636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.862726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.862754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.862844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.862872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.862988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.863947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.863974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.864923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.864948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.865934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.865959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.866865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.866984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.867026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.867136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.867161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.867247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.867274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.867375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.867400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.867482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.867509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.867649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.454 [2024-10-17 16:55:08.867674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.454 qpair failed and we were unable to recover it.
00:26:55.454 [2024-10-17 16:55:08.867785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.867811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.867903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.867928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.868901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.868940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.869963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.869990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.870916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.870942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.871863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.871903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.872901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.872927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.873066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.455 [2024-10-17 16:55:08.873092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.455 qpair failed and we were unable to recover it.
00:26:55.455 [2024-10-17 16:55:08.873203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.873229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.873307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.873334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.873446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.873472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.873583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.873608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.873734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.873762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.873887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.873915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.874041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.874081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.874207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.874239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.874329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.456 [2024-10-17 16:55:08.874356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.456 qpair failed and we were unable to recover it.
00:26:55.456 [2024-10-17 16:55:08.874478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.874503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.874582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.874609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.874726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.874752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.874866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.874894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.874977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 
00:26:55.456 [2024-10-17 16:55:08.875122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.875277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.875442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.875559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.875701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 
00:26:55.456 [2024-10-17 16:55:08.875840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.875954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.875981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.876091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.876253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.876408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 
00:26:55.456 [2024-10-17 16:55:08.876531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.876651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.876763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.876879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.876907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.877025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 
00:26:55.456 [2024-10-17 16:55:08.877141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.877264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.877381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.877493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.877667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 
00:26:55.456 [2024-10-17 16:55:08.877808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.456 [2024-10-17 16:55:08.877838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.456 qpair failed and we were unable to recover it. 00:26:55.456 [2024-10-17 16:55:08.877948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.877974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.878452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.878934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.878973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.879099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.879227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.879359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.879467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.879597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.879746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.879860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.879969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.879995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.880101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.880242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.880351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.880463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.880619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.880757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.880868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.880893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.880974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.881126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.881264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.881409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.881528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.881648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.881810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.881950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.881976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.882095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.882242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.882384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.882550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.882667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.882799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.882937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.882966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 
00:26:55.457 [2024-10-17 16:55:08.883068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.883096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.883195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.457 [2024-10-17 16:55:08.883222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.457 qpair failed and we were unable to recover it. 00:26:55.457 [2024-10-17 16:55:08.883335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.883361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 00:26:55.458 [2024-10-17 16:55:08.883445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.883471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 00:26:55.458 [2024-10-17 16:55:08.883553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.883580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 
00:26:55.458 [2024-10-17 16:55:08.883663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.883688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 00:26:55.458 [2024-10-17 16:55:08.883827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.883852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 00:26:55.458 [2024-10-17 16:55:08.883943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.883969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 00:26:55.458 [2024-10-17 16:55:08.884072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.884098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 00:26:55.458 [2024-10-17 16:55:08.884211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.458 [2024-10-17 16:55:08.884236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.458 qpair failed and we were unable to recover it. 
00:26:55.458 [2024-10-17 16:55:08.884321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.458 [2024-10-17 16:55:08.884346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.458 qpair failed and we were unable to recover it.
00:26:55.458 [... 16:55:08.884458 through 16:55:08.899345: the same three-line failure pattern (posix_sock_create connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock connection error → "qpair failed and we were unable to recover it.") repeats continuously, alternating over tqpair handles 0x1b24060, 0x7f01f4000b90, 0x7f01f8000b90, and 0x7f0200000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:55.461 [2024-10-17 16:55:08.899429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.899455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.899548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.899576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.899676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.899705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.899821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.899848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.899959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.899985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.900116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.900231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.900368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.900482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.900592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.900716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.900855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.900881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.900998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.901141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.901243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.901352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.901463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.901626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.901736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.901877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.901904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.901990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.902142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.902252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.902390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.902507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.902655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.902821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.902935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.902960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.903056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.903163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.903273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.903402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.903541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.903681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.461 [2024-10-17 16:55:08.903828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 
00:26:55.461 [2024-10-17 16:55:08.903970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.461 [2024-10-17 16:55:08.903997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.461 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.904628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.904967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.904992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.905115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.905221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.905365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.905477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.905600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.905745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.905855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.905967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.905996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.906456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.906860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.906994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.907163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.907288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.907407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.907520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.907674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.907818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.907941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.907970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.908444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.908915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.908942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 
00:26:55.462 [2024-10-17 16:55:08.909029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.909055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.462 qpair failed and we were unable to recover it. 00:26:55.462 [2024-10-17 16:55:08.909141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.462 [2024-10-17 16:55:08.909167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.909284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.909310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.909424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.909450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.909536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.909562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.909646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.909672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.909764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.909790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.909875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.909900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.909986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.910107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.910215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.910372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.910485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.910594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.910709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.910839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.910885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.910976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.911136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.911253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.911391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.911508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.911625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.911780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.911917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.911944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.912070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.912188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.912297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.912420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.912554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.912710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.912854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.912970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.912997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.913091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.913255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.913401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.913518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.913671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.913814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.913927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.913955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.914045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.914073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 
00:26:55.463 [2024-10-17 16:55:08.914175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.914201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.914279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.463 [2024-10-17 16:55:08.914305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.463 qpair failed and we were unable to recover it. 00:26:55.463 [2024-10-17 16:55:08.914422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.914448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.914529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.914555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.914642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.914670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.914792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.914820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.914929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.914955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.915446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.915965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.915991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.916089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.916206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.916343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.916455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.916562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.916689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.916817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.916946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.916985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.917123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.917236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.917376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.917520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.917657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.917772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.917926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.917953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.918048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.918159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.918269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.918367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.464 [2024-10-17 16:55:08.918400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.918514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.464 [2024-10-17 16:55:08.918625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.918779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.918932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.918960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.919069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.919097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 00:26:55.464 [2024-10-17 16:55:08.919188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.464 [2024-10-17 16:55:08.919216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.464 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-10-17 16:55:08.919308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.919335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.919427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.919453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.919579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.919607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.919699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.919726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.919808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.919834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-10-17 16:55:08.919918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.919945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-10-17 16:55:08.920536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.920917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.920945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.921054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-10-17 16:55:08.921184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.921314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.921430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.921541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.921665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-10-17 16:55:08.921806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.921948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.921973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.922077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.922105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.922227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.922267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 00:26:55.465 [2024-10-17 16:55:08.922352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.465 [2024-10-17 16:55:08.922380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.465 qpair failed and we were unable to recover it. 
00:26:55.465 [2024-10-17 16:55:08.922499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.922525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.922616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.922643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.922735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.922763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.922897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.922924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.923909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.923997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.924048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.924142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.924168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.924271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.465 [2024-10-17 16:55:08.924298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.465 qpair failed and we were unable to recover it.
00:26:55.465 [2024-10-17 16:55:08.924387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.924414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.924497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.924524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.924611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.924639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.924748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.924788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.924912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.924941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.925885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.925913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.926894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.926921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.927946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.927986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.928952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.928978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.929076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.929103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.929199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.929225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.929309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.929336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.929442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.929469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.466 [2024-10-17 16:55:08.929570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.466 [2024-10-17 16:55:08.929598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.466 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.929689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.929716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.929866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.929893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.929987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.930907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.930993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.931957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.931993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.932941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.932971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.933885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.933972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.467 [2024-10-17 16:55:08.934804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.467 qpair failed and we were unable to recover it.
00:26:55.467 [2024-10-17 16:55:08.934906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.467 [2024-10-17 16:55:08.934946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.467 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.935615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.935881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.935978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.936129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.936277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.936419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.936536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.936727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.936850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.936889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.936989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.937129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.937250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.937360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.937480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.937625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.937739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.937901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.937940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.938044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.938187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.938302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.938415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.938555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.938673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.938807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.938941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.938969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.939069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.939216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.939392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.939520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.939656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.939774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.939889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.939928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.940058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.940087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.940170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.940196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 
00:26:55.468 [2024-10-17 16:55:08.940313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.940339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.468 [2024-10-17 16:55:08.940426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.468 [2024-10-17 16:55:08.940452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.468 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.940540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.940571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.940658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.940683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.940769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.940795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.940927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.940955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.941546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.941903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.941946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.942079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.942203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.942327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.942446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.942556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.942668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.942807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.942947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.942972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.943435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.943831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.943964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.944113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.944223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.944369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.944512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.944618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 
00:26:55.469 [2024-10-17 16:55:08.944756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.944869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.944895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.469 [2024-10-17 16:55:08.944996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.469 [2024-10-17 16:55:08.945031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.469 qpair failed and we were unable to recover it. 00:26:55.470 [2024-10-17 16:55:08.945133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-10-17 16:55:08.945256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-10-17 16:55:08.945399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-10-17 16:55:08.945541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-10-17 16:55:08.945647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-10-17 16:55:08.945776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 00:26:55.470 [2024-10-17 16:55:08.945886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.470 [2024-10-17 16:55:08.945913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.470 qpair failed and we were unable to recover it. 
00:26:55.470 [2024-10-17 16:55:08.946013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.946896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.946923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.947948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.947975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.948848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.948886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.949961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.949989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.950091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.950117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.950209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.950235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.470 [2024-10-17 16:55:08.950352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.470 [2024-10-17 16:55:08.950384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.470 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.950474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.950500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.950615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.950643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.950743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.950769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.950867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.950894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.950995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.951886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.951926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.952866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.952979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.953915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.953955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.954885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.954926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.955041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.955080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.955175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.955203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.955305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.955331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.955453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.955480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.955566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.955595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.471 [2024-10-17 16:55:08.955707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.471 [2024-10-17 16:55:08.955734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.471 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.955825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.955851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.955972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.955998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.956883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.956911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.957941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.957970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.958898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.958928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.959936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.959963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.472 [2024-10-17 16:55:08.960917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.472 [2024-10-17 16:55:08.960959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.472 qpair failed and we were unable to recover it.
00:26:55.473 [2024-10-17 16:55:08.961068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.961189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.961308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.961414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.961537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.961641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.961771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.961875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.961901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.962014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.962130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.962269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.962385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.962513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.962654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.962798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.962940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.962967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.963096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.963217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.963354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.963457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.963563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.963691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.963858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.963887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.964011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.964177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.964313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.964434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.964583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.964709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.964835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.964950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.964981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 
00:26:55.473 [2024-10-17 16:55:08.965579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.473 [2024-10-17 16:55:08.965871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.473 qpair failed and we were unable to recover it. 00:26:55.473 [2024-10-17 16:55:08.965959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.965993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.966084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.966202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.966337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.966450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.966562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.966690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.966816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.966967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.966993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.967094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.967239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.967382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.967510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.967621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.967769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.967891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.967919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.968011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.968156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.968271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.968395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.968560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.968674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.968793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.968913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.968940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.969035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.969171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.969289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.969456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.969599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.969740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.969855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.969883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.970150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 
00:26:55.474 [2024-10-17 16:55:08.970784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.474 [2024-10-17 16:55:08.970919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.474 qpair failed and we were unable to recover it. 00:26:55.474 [2024-10-17 16:55:08.970997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.971133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.971250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.971363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.971475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.971619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.971730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.971887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.971926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.972030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.972172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.972290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.972409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.972524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.972662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.972807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.972930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.972955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.973045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.973184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.973291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.973427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.973536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.973689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.973826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.973934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.973962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.974550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.974854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.974963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.975121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.975243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.975381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.975499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.975639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.975755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.975915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.975954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.976056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.976084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.976212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.976242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.976362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.976390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.475 [2024-10-17 16:55:08.976480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.976507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 
00:26:55.475 [2024-10-17 16:55:08.976598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.475 [2024-10-17 16:55:08.976624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.475 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.976740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.976767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.976874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.976914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.977014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.977153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.977288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.977421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.977527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.977669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.977777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.977881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.977905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.978500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.978966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.978995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.979107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.979225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.979356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.979482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.979606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.979728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.979834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.979951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.979977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.980277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.980824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.980855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.980918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.476 [2024-10-17 16:55:08.980951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.476 [2024-10-17 16:55:08.980966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.476 [2024-10-17 16:55:08.980978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.476 [2024-10-17 16:55:08.980988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.476 [2024-10-17 16:55:08.980977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.981013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.981159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.981185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 
00:26:55.476 [2024-10-17 16:55:08.981271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.981299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.981390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.981418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.981541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.981569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.981685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.476 [2024-10-17 16:55:08.981712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.476 qpair failed and we were unable to recover it. 00:26:55.476 [2024-10-17 16:55:08.981820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.981860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.981986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.982587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:55.477 [2024-10-17 16:55:08.982613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:55.477 [2024-10-17 16:55:08.982660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:55.477 [2024-10-17 16:55:08.982664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.477 [2024-10-17 16:55:08.982592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.982966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.982995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.983107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.983222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.983335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.983472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.983593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.983710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.983845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.983884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.983984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.984159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.984283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.984426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.984538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.984651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.984760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.984876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.984901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.984987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.985559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.985916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.985945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.986041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.986069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 
00:26:55.477 [2024-10-17 16:55:08.986149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.986175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.986255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.477 [2024-10-17 16:55:08.986281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.477 qpair failed and we were unable to recover it. 00:26:55.477 [2024-10-17 16:55:08.986392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.986418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.986509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.986535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.986651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.986678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 
00:26:55.478 [2024-10-17 16:55:08.986792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.986818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.986941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.986969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 
00:26:55.478 [2024-10-17 16:55:08.987395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.987862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 
00:26:55.478 [2024-10-17 16:55:08.987965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.987990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 
00:26:55.478 [2024-10-17 16:55:08.988646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.988894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.988975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.989122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 
00:26:55.478 [2024-10-17 16:55:08.989228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.989332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.989452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.989556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.989669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 
00:26:55.478 [2024-10-17 16:55:08.989770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.989886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.478 [2024-10-17 16:55:08.989912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.478 qpair failed and we were unable to recover it. 00:26:55.478 [2024-10-17 16:55:08.990009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.990157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.990278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.990391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.990506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.990620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.990758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.990868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.990893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.990987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.991639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.991967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.991992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.992086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.992226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.992324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.992429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.992535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.992641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.992762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.992919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.992946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.993455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.993918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.993944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.994031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.994140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.994257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.994378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.994489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 
00:26:55.479 [2024-10-17 16:55:08.994598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.994713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.479 [2024-10-17 16:55:08.994888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.479 [2024-10-17 16:55:08.994915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.479 qpair failed and we were unable to recover it. 00:26:55.480 [2024-10-17 16:55:08.995012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.480 [2024-10-17 16:55:08.995039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.480 qpair failed and we were unable to recover it. 00:26:55.480 [2024-10-17 16:55:08.995122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.480 [2024-10-17 16:55:08.995148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.480 qpair failed and we were unable to recover it. 
00:26:55.480 [2024-10-17 16:55:08.995231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.995365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.995480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.995592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.995711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.995827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.995931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.995957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.996916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.996944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.997902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.997992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.998933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.998960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.999079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.999190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.999332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.999445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.999561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.480 [2024-10-17 16:55:08.999687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.480 qpair failed and we were unable to recover it.
00:26:55.480 [2024-10-17 16:55:08.999811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:08.999837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:08.999920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:08.999946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.000838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.000971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.001896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.001979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.002958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.002991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.003914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.003942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.004046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.004076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.004180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.004208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.004301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.481 [2024-10-17 16:55:09.004328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.481 qpair failed and we were unable to recover it.
00:26:55.481 [2024-10-17 16:55:09.004413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.004439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.004525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.004552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.004653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.004679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.004760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.004785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.004886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.004911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.005924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.005949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.006068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.006179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.006318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.006421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.006529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.482 [2024-10-17 16:55:09.006640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.482 qpair failed and we were unable to recover it.
00:26:55.482 [2024-10-17 16:55:09.006749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.006774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.006872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.006898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.006991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.007118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.007228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-10-17 16:55:09.007348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.007461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.007583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.007697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.007820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 
00:26:55.482 [2024-10-17 16:55:09.007930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.007956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.008045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.008071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.482 [2024-10-17 16:55:09.008153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.482 [2024-10-17 16:55:09.008178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.482 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.008278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.008302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.008387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.008414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.008524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.008549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.008646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.008674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.008762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.008790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.008898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.008938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.009068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.009189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.009309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.009457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.009567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.009687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.009825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.009929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.009955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.010385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.010845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.010871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.010984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.011563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.011919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.011945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.012131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 
00:26:55.483 [2024-10-17 16:55:09.012731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.012971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.483 [2024-10-17 16:55:09.012997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.483 qpair failed and we were unable to recover it. 00:26:55.483 [2024-10-17 16:55:09.013105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.013225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.013348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.013468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.013575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.013694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.013808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.013930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.013957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.014555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.014897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.014925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.015037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.015172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.015294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.015433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.015540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.015649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.015760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.015884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.015910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.016365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.016832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.016941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.016967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 
00:26:55.484 [2024-10-17 16:55:09.017508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.484 [2024-10-17 16:55:09.017879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.484 [2024-10-17 16:55:09.017904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.484 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.017986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.018115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.018219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.018333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.018455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.018572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.018676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.018790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.018897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.018931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.019242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.019850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.019878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.019974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.020467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.020920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.020945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.021048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.021603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.021943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.021971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.022110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.022144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 
00:26:55.485 [2024-10-17 16:55:09.022244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.022270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.022351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.022377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.022462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.022489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.022580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.022607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.485 qpair failed and we were unable to recover it. 00:26:55.485 [2024-10-17 16:55:09.022693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.485 [2024-10-17 16:55:09.022719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.022812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.022852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.022946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.022974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.023453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.023956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.023984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.024076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.024192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.024302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.024414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.024533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.024641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.024770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.024908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.024934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.025035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.025149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.025263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.025381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.025498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.025635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.025743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.025871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.025910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 
00:26:55.486 [2024-10-17 16:55:09.026470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.486 [2024-10-17 16:55:09.026714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.486 qpair failed and we were unable to recover it. 00:26:55.486 [2024-10-17 16:55:09.026805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.026831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.026921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.026953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-10-17 16:55:09.027058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-10-17 16:55:09.027621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.027914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.027998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.028122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-10-17 16:55:09.028249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.028387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.028491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.028633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.028745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-10-17 16:55:09.028910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.028948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.029053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.029082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.029162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.029188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.029270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.029302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 00:26:55.487 [2024-10-17 16:55:09.029381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.487 [2024-10-17 16:55:09.029407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.487 qpair failed and we were unable to recover it. 
00:26:55.487 [2024-10-17 16:55:09.029486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.029511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.029604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.029632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.029722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.029751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.029831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.029858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.029948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.029975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.030092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.030211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.030331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.030437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.030558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.487 [2024-10-17 16:55:09.030700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.487 qpair failed and we were unable to recover it.
00:26:55.487 [2024-10-17 16:55:09.030780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.030805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.030895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.030921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.031952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.031980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.032939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.032965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.033893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.033983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.034922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.034949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.035032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.035059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.035170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.035196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.035282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.035308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.035403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.488 [2024-10-17 16:55:09.035429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.488 qpair failed and we were unable to recover it.
00:26:55.488 [2024-10-17 16:55:09.035519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.035545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.035626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.035652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.035742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.035768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.035858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.035884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0200000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 A controller has encountered a failure and is being reset.
00:26:55.489 [2024-10-17 16:55:09.036013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.036967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.036996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f4000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.037891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.037978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01f8000b90 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.038906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.038986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.039917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.039943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.040035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.040062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.040142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.040168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.040246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.040271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.040351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.489 [2024-10-17 16:55:09.040377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.489 qpair failed and we were unable to recover it.
00:26:55.489 [2024-10-17 16:55:09.040451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.490 [2024-10-17 16:55:09.040477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.490 qpair failed and we were unable to recover it.
00:26:55.490 [2024-10-17 16:55:09.040551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.490 [2024-10-17 16:55:09.040576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420
00:26:55.490 qpair failed and we were unable to recover it.
00:26:55.490 [2024-10-17 16:55:09.040687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-10-17 16:55:09.040712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-10-17 16:55:09.040801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-10-17 16:55:09.040826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-10-17 16:55:09.040918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-10-17 16:55:09.040943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-10-17 16:55:09.041053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-10-17 16:55:09.041079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 00:26:55.490 [2024-10-17 16:55:09.041167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.490 [2024-10-17 16:55:09.041193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b24060 with addr=10.0.0.2, port=4420 00:26:55.490 qpair failed and we were unable to recover it. 
00:26:55.490 [2024-10-17 16:55:09.041349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.490 [2024-10-17 16:55:09.041403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b31ff0 with addr=10.0.0.2, port=4420
00:26:55.490 [2024-10-17 16:55:09.041426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31ff0 is same with the state(6) to be set
00:26:55.490 [2024-10-17 16:55:09.041452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b31ff0 (9): Bad file descriptor
00:26:55.490 [2024-10-17 16:55:09.041471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:55.490 [2024-10-17 16:55:09.041485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:55.490 [2024-10-17 16:55:09.041502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:55.490 Unable to reset the controller.
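Every connect() failure above reports errno = 111, which on Linux is ECONNREFUSED: the host's connection to 10.0.0.2:4420 is refused because nothing is listening there yet (the target is still being torn down / restarted at this point in the test). A quick way to decode the errno value:

```sh
# Decode errno 111 as logged by the SPDK host above (Linux numbering).
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# -> ECONNREFUSED - Connection refused
```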
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 Malloc0
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 [2024-10-17 16:55:09.176597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 [2024-10-17 16:55:09.204870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.750 16:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2472918
00:26:56.695 Controller properly reset.
00:27:01.967 Initializing NVMe Controllers
00:27:01.967 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:01.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:01.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:01.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:01.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:01.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:01.967 Initialization complete. Launching workers.
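The xtrace above shows host/target_disconnect.sh rebuilding the target over SPDK's JSON-RPC interface: a malloc bdev, the TCP transport, a subsystem, a namespace, and two listeners. A dry-run sketch of that sequence follows; the RPC names and arguments are taken from the trace itself, while the bare `rpc.py` invocation is an assumption (the test actually goes through its `rpc_cmd` wrapper, and a real run needs a running nvmf_tgt plus the script's full path):

```sh
# Dry-run sketch of the target-setup RPCs traced above.
# RPC="echo rpc.py" prints each call instead of issuing it; drop the echo
# (and point rpc.py at a live nvmf_tgt) to actually configure a target.
RPC="echo rpc.py"
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the data listener is up, the host's reconnect loop succeeds, which is why the log flips from the ECONNREFUSED burst to "Controller properly reset." and the perf workers attach.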
00:27:01.967 Starting thread on core 1 00:27:01.967 Starting thread on core 2 00:27:01.967 Starting thread on core 3 00:27:01.967 Starting thread on core 0 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:01.967 00:27:01.967 real 0m10.689s 00:27:01.967 user 0m34.195s 00:27:01.967 sys 0m7.105s 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.967 ************************************ 00:27:01.967 END TEST nvmf_target_disconnect_tc2 00:27:01.967 ************************************ 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.967 rmmod nvme_tcp 00:27:01.967 rmmod nvme_fabrics 00:27:01.967 rmmod nvme_keyring 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2473449 ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2473449 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2473449 ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2473449 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2473449 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2473449' 00:27:01.967 killing process with pid 2473449 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2473449 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2473449 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.967 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.968 16:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.905 16:55:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.905 00:27:03.905 real 0m15.617s 00:27:03.905 user 1m0.196s 00:27:03.905 sys 0m9.507s 00:27:03.905 16:55:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.905 16:55:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:03.905 ************************************ 00:27:03.905 END TEST nvmf_target_disconnect 00:27:03.905 ************************************ 00:27:03.905 16:55:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:03.905 00:27:03.905 real 5m5.334s 00:27:03.905 user 11m3.289s 00:27:03.905 sys 1m15.124s 00:27:03.905 16:55:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.905 16:55:17 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.905 ************************************ 00:27:03.905 END TEST nvmf_host 00:27:03.905 ************************************ 00:27:03.905 16:55:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:03.905 16:55:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:03.905 16:55:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:03.905 16:55:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:03.905 16:55:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.905 16:55:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.905 ************************************ 00:27:03.905 START TEST nvmf_target_core_interrupt_mode 00:27:03.905 ************************************ 00:27:03.905 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:04.164 * Looking for test storage... 
00:27:04.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:04.164 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:04.165 16:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.165 --rc 
genhtml_branch_coverage=1 00:27:04.165 --rc genhtml_function_coverage=1 00:27:04.165 --rc genhtml_legend=1 00:27:04.165 --rc geninfo_all_blocks=1 00:27:04.165 --rc geninfo_unexecuted_blocks=1 00:27:04.165 00:27:04.165 ' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.165 --rc genhtml_branch_coverage=1 00:27:04.165 --rc genhtml_function_coverage=1 00:27:04.165 --rc genhtml_legend=1 00:27:04.165 --rc geninfo_all_blocks=1 00:27:04.165 --rc geninfo_unexecuted_blocks=1 00:27:04.165 00:27:04.165 ' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.165 --rc genhtml_branch_coverage=1 00:27:04.165 --rc genhtml_function_coverage=1 00:27:04.165 --rc genhtml_legend=1 00:27:04.165 --rc geninfo_all_blocks=1 00:27:04.165 --rc geninfo_unexecuted_blocks=1 00:27:04.165 00:27:04.165 ' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.165 --rc genhtml_branch_coverage=1 00:27:04.165 --rc genhtml_function_coverage=1 00:27:04.165 --rc genhtml_legend=1 00:27:04.165 --rc geninfo_all_blocks=1 00:27:04.165 --rc geninfo_unexecuted_blocks=1 00:27:04.165 00:27:04.165 ' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.165 
16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.165 16:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:04.165 
16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:04.165 ************************************ 00:27:04.165 START TEST nvmf_abort 00:27:04.165 ************************************ 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:04.165 * Looking for test storage... 
00:27:04.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:04.165 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:04.166 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:27:04.166 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.424 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:04.425 16:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:04.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.425 --rc genhtml_branch_coverage=1 00:27:04.425 --rc genhtml_function_coverage=1 00:27:04.425 --rc genhtml_legend=1 00:27:04.425 --rc geninfo_all_blocks=1 00:27:04.425 --rc geninfo_unexecuted_blocks=1 00:27:04.425 00:27:04.425 ' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:04.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.425 --rc genhtml_branch_coverage=1 00:27:04.425 --rc genhtml_function_coverage=1 00:27:04.425 --rc genhtml_legend=1 00:27:04.425 --rc geninfo_all_blocks=1 00:27:04.425 --rc geninfo_unexecuted_blocks=1 00:27:04.425 00:27:04.425 ' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:04.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.425 --rc genhtml_branch_coverage=1 00:27:04.425 --rc genhtml_function_coverage=1 00:27:04.425 --rc genhtml_legend=1 00:27:04.425 --rc geninfo_all_blocks=1 00:27:04.425 --rc geninfo_unexecuted_blocks=1 00:27:04.425 00:27:04.425 ' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:04.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.425 --rc genhtml_branch_coverage=1 00:27:04.425 --rc genhtml_function_coverage=1 00:27:04.425 --rc genhtml_legend=1 00:27:04.425 --rc geninfo_all_blocks=1 00:27:04.425 --rc geninfo_unexecuted_blocks=1 00:27:04.425 00:27:04.425 ' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.425 16:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.425 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.426 16:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.426 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
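The lines that follow show `gather_supported_nvmf_pci_devs` building per-vendor ID lists (`e810`, `x722`, `mlx`) and matching each PCI NIC against them. The classification can be sketched roughly as below; the ID table is reconstructed only from the pairs visible in this trace, not the full list in `nvmf/common.sh`:

```python
# Rough sketch of the NIC classification performed by
# gather_supported_nvmf_pci_devs in nvmf/common.sh. The (vendor, device)
# pairs below are only those that appear in this log; the real script
# maintains fuller lists.

INTEL = 0x8086
MELLANOX = 0x15b3

# bucket name -> set of (vendor, device) pairs seen in this trace
BUCKETS = {
    "e810": {(INTEL, 0x1592), (INTEL, 0x159b)},
    "x722": {(INTEL, 0x37d2)},
    "mlx": {(MELLANOX, 0xa2dc), (MELLANOX, 0x1021), (MELLANOX, 0xa2d6),
            (MELLANOX, 0x101d), (MELLANOX, 0x101b), (MELLANOX, 0x1017),
            (MELLANOX, 0x1019), (MELLANOX, 0x1015), (MELLANOX, 0x1013)},
}

def classify(vendor, device):
    """Return the bucket a NIC's (vendor, device) pair falls into, or None."""
    for name, ids in BUCKETS.items():
        if (vendor, device) in ids:
            return name
    return None

# The two ports found in this run (0000:09:00.0/.1, 0x8086:0x159b) are E810:
print(classify(0x8086, 0x159b))  # e810
```

This matches the run: both `Found 0000:09:00.x (0x8086 - 0x159b)` devices land in the `e810` bucket (driver `ice`), and `pci_devs` is then set to that bucket.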
00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.331 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.332 16:55:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:06.332 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:06.332 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.332 
16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:06.332 Found net devices under 0000:09:00.0: cvl_0_0 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:06.332 Found net devices under 0000:09:00.1: cvl_0_1 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.332 16:55:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.332 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.332 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.332 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:27:06.591 00:27:06.591 --- 10.0.0.2 ping statistics --- 00:27:06.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.591 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:27:06.591 00:27:06.591 --- 10.0.0.1 ping statistics --- 00:27:06.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.591 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2476262 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2476262 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2476262 ']' 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.591 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.592 [2024-10-17 16:55:20.200645] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:06.592 [2024-10-17 16:55:20.201874] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
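Here `nvmf_tgt` is launched with `-m 0xE`, and the notices that follow report "Total cores available: 3" with reactors starting on cores 1, 2, and 3. That is exactly how a hexadecimal core mask decodes; a generic sketch of the decoding (not SPDK's actual parser):

```python
def cores_from_mask(mask):
    """Decode a CPU core mask (e.g. 0xE = 0b1110) into a list of core indices."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

# -m 0xE selects bits 1, 2 and 3, i.e. three cores and no core 0:
print(cores_from_mask(0xE))  # [1, 2, 3]
```

This agrees with the log: three reactor threads come up, on cores 1, 2, and 3, and the DPDK EAL is handed the equivalent `-c 0xE`.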
00:27:06.592 [2024-10-17 16:55:20.201944] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.592 [2024-10-17 16:55:20.264851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:06.851 [2024-10-17 16:55:20.323283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.851 [2024-10-17 16:55:20.323334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.851 [2024-10-17 16:55:20.323365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.851 [2024-10-17 16:55:20.323377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.851 [2024-10-17 16:55:20.323387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.851 [2024-10-17 16:55:20.324868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.851 [2024-10-17 16:55:20.324938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.851 [2024-10-17 16:55:20.324934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.851 [2024-10-17 16:55:20.410740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:06.852 [2024-10-17 16:55:20.411032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:06.852 [2024-10-17 16:55:20.411041] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:06.852 [2024-10-17 16:55:20.411305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.852 [2024-10-17 16:55:20.465571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:06.852 Malloc0 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.852 Delay0 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.852 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:07.112 [2024-10-17 16:55:20.545830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.112 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:07.112 [2024-10-17 16:55:20.606973] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:09.019 Initializing NVMe Controllers 00:27:09.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:09.019 controller IO queue size 128 less than required 00:27:09.019 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:09.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:09.019 Initialization complete. Launching workers. 
00:27:09.019 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27185 00:27:09.019 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27242, failed to submit 66 00:27:09.019 success 27185, unsuccessful 57, failed 0 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.019 rmmod nvme_tcp 00:27:09.019 rmmod nvme_fabrics 00:27:09.019 rmmod nvme_keyring 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.019 16:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2476262 ']' 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2476262 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2476262 ']' 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2476262 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.019 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2476262 00:27:09.278 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:09.278 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:09.278 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2476262' 00:27:09.278 killing process with pid 2476262 00:27:09.278 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2476262 00:27:09.278 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2476262 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:09.536 16:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.536 16:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.436 00:27:11.436 real 0m7.237s 00:27:11.436 user 0m8.943s 00:27:11.436 sys 0m2.904s 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.436 ************************************ 00:27:11.436 END TEST nvmf_abort 00:27:11.436 ************************************ 00:27:11.436 16:55:25 
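The abort run above reports its outcome as counters: "abort submitted 27242, failed to submit 66, success 27185, unsuccessful 57, failed 0". As a quick sanity check (a hypothetical helper sketch, not part of abort.sh or the SPDK test suite), those counters can be verified for self-consistency — successes plus unsuccessful aborts should equal the number of aborts submitted:

```shell
#!/bin/sh
# Hypothetical sketch: check that the abort summary counters from the run
# above are self-consistent (success + unsuccessful == submitted).
set -eu

# Summary line as printed by the abort example in this log.
summary="abort submitted 27242, failed to submit 66, success 27185, unsuccessful 57, failed 0"

# Strip commas so awk sees clean numeric fields.
clean=$(printf '%s' "$summary" | tr -d ',')

submitted=$(printf '%s' "$clean" | awk '{print $3}')     # after "abort submitted"
success=$(printf '%s' "$clean" | awk '{print $9}')       # after "success"
unsuccessful=$(printf '%s' "$clean" | awk '{print $11}') # after "unsuccessful"

if [ $((success + unsuccessful)) -eq "$submitted" ]; then
    echo consistent
else
    echo inconsistent
fi
```

For this run, 27185 + 57 = 27242, so the counters reconcile with the submitted total; the 66 "failed to submit" aborts are reported separately and do not count toward that total.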
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:11.436 ************************************ 00:27:11.436 START TEST nvmf_ns_hotplug_stress 00:27:11.436 ************************************ 00:27:11.436 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:11.695 * Looking for test storage... 
00:27:11.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.695 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.695 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.695 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:11.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.695 --rc genhtml_branch_coverage=1 00:27:11.695 --rc genhtml_function_coverage=1 00:27:11.695 --rc genhtml_legend=1 00:27:11.695 --rc geninfo_all_blocks=1 00:27:11.695 --rc geninfo_unexecuted_blocks=1 00:27:11.695 00:27:11.695 ' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.696 --rc genhtml_branch_coverage=1 00:27:11.696 --rc genhtml_function_coverage=1 00:27:11.696 --rc genhtml_legend=1 00:27:11.696 --rc geninfo_all_blocks=1 00:27:11.696 --rc geninfo_unexecuted_blocks=1 00:27:11.696 00:27:11.696 ' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.696 --rc genhtml_branch_coverage=1 00:27:11.696 --rc genhtml_function_coverage=1 00:27:11.696 --rc genhtml_legend=1 00:27:11.696 --rc geninfo_all_blocks=1 00:27:11.696 --rc geninfo_unexecuted_blocks=1 00:27:11.696 00:27:11.696 ' 00:27:11.696 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.696 --rc genhtml_branch_coverage=1 00:27:11.696 --rc genhtml_function_coverage=1 00:27:11.696 --rc genhtml_legend=1 00:27:11.696 --rc geninfo_all_blocks=1 00:27:11.696 --rc geninfo_unexecuted_blocks=1 00:27:11.696 00:27:11.696 ' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.696 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.696 
16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.696 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.600 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.601 
16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.601 16:55:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:13.601 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.601 16:55:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:13.601 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.601 
16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:13.601 Found net devices under 0000:09:00.0: cvl_0_0 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:13.601 Found net devices under 0000:09:00.1: cvl_0_1 00:27:13.601 
16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.601 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:13.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:27:13.860 00:27:13.860 --- 10.0.0.2 ping statistics --- 00:27:13.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.860 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:27:13.860 00:27:13.860 --- 10.0.0.1 ping statistics --- 00:27:13.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.860 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:13.860 16:55:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.860 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2478480 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2478480 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2478480 ']' 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.861 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:13.861 [2024-10-17 16:55:27.424552] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:13.861 [2024-10-17 16:55:27.425655] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:27:13.861 [2024-10-17 16:55:27.425719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.861 [2024-10-17 16:55:27.493214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:14.118 [2024-10-17 16:55:27.555958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.118 [2024-10-17 16:55:27.556038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.119 [2024-10-17 16:55:27.556055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.119 [2024-10-17 16:55:27.556068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.119 [2024-10-17 16:55:27.556079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:14.119 [2024-10-17 16:55:27.557688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.119 [2024-10-17 16:55:27.557779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.119 [2024-10-17 16:55:27.557783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.119 [2024-10-17 16:55:27.650709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:14.119 [2024-10-17 16:55:27.650935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:14.119 [2024-10-17 16:55:27.650950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:14.119 [2024-10-17 16:55:27.651237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
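For reference, the network setup recorded in the trace above (flush the two ice interfaces, move cvl_0_0 into a namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420, ping both ways) can be sketched as follows. This is a hedged reconstruction, not SPDK's actual nvmf/common.sh: interface names, the namespace name, and IPs are taken from the log, and commands are echoed by default since the real ones need root and the cvl_0_* devices.

```shell
#!/usr/bin/env bash
# Hedged sketch of the netns-based TCP topology seen in the trace above.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute
# for real (requires root and the cvl_0_0/cvl_0_1 interfaces).
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
TARGET_IF=cvl_0_0          # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1       # stays in the root namespace
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() {
    if (( DRY_RUN )); then echo "+ $*"; else "$@"; fi
}

setup_topology() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"
    run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    # Allow NVMe/TCP traffic (port 4420) in from the initiator-side interface.
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 "$TARGET_IP"
}

setup_topology
```

After this point the target app is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which matches the `NVMF_TARGET_NS_CMD` prefix visible in the trace.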
00:27:14.119 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:14.377 [2024-10-17 16:55:27.954515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.377 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:14.636 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.895 [2024-10-17 16:55:28.562916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.895 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.460 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:15.718 Malloc0 00:27:15.718 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:15.976 Delay0 00:27:15.976 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.234 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:16.492 NULL1 00:27:16.492 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:16.750 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2478895 00:27:16.750 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:16.750 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:16.750 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.127 Read completed with error (sct=0, sc=11) 00:27:18.127 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
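The repeating pattern in the records that follow — while `spdk_nvme_perf` is still alive (`kill -0 $PERF_PID`), remove namespace 1, re-add Delay0, and grow NULL1 by one block per iteration (`null_size=1001`, `1002`, ...) — can be sketched as below. This is a hedged reconstruction of the loop in target/ns_hotplug_stress.sh, with the NQN and bdev names taken from the log; the rpc.py calls are echoed rather than executed, since they need a live SPDK target.

```shell
#!/usr/bin/env bash
# Hedged sketch of the ns_hotplug_stress loop visible in this trace.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "+ rpc.py $*"; }      # stand-in for scripts/rpc.py (no live target here)

hotplug_iteration() {
    local null_size=$1
    rpc nvmf_subsystem_remove_ns "$NQN" 1       # hot-unplug namespace 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0     # hot-plug the delay bdev back
    rpc bdev_null_resize NULL1 "$null_size"     # grow NULL1 by one block
}

null_size=1000
for _ in 1 2 3; do   # the real script loops while `kill -0 $PERF_PID` succeeds
    (( ++null_size ))
    hotplug_iteration "$null_size"
done
```

The "Read completed with error (sct=0, sc=11)" lines interleaved through the trace are the expected side effect: I/O issued by the perf process against the just-removed namespace fails, and repeats are suppressed after 999 occurrences.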
00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:18.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:18.127 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:18.127 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:18.386 true 00:27:18.386 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:18.386 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.320 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.578 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:19.578 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:19.836 true 00:27:19.836 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:19.836 16:55:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.094 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.352 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:20.352 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:20.608 true 00:27:20.608 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:20.608 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.544 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.544 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:21.544 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:21.802 true 00:27:21.802 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 
00:27:21.802 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.368 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.368 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:22.368 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:22.627 true 00:27:22.627 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:22.627 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.886 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.453 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:23.453 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:23.453 true 00:27:23.453 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2478895 00:27:23.453 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.396 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.654 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:24.654 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:24.913 true 00:27:25.171 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:25.172 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.429 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.686 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:25.687 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:25.944 true 00:27:25.944 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:25.944 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.202 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.460 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:26.460 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:26.718 true 00:27:26.718 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:26.718 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.655 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:27.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:27.912 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:27.912 16:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:28.170 true 00:27:28.170 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:28.170 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.428 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.686 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:28.686 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:28.945 true 00:27:28.945 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:28.945 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.204 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.462 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 
00:27:29.462 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:29.723 true 00:27:29.723 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:29.723 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:30.659 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.916 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:30.916 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:31.174 true 00:27:31.174 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:31.174 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.432 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:31.690 16:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:31.690 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:31.948 true 00:27:31.948 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:31.948 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.516 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.516 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:32.516 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:32.775 true 00:27:32.775 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:32.775 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.712 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:27:33.971 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:33.971 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:34.229 true 00:27:34.229 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:34.229 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.795 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.795 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:34.795 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:35.052 true 00:27:35.310 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:35.310 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.568 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.826 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:35.826 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:36.085 true 00:27:36.085 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:36.085 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.022 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.280 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:37.280 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:37.538 true 00:27:37.538 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:37.538 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.796 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.055 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:38.055 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:38.312 true 00:27:38.312 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:38.313 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.580 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.838 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:38.838 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:39.097 true 00:27:39.097 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:39.097 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.036 16:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.294 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:40.294 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:40.594 true 00:27:40.594 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:40.594 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.907 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.164 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:41.164 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:41.421 true 00:27:41.421 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:41.421 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
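Each cycle in the trace above is the same three-RPC sequence driven by ns_hotplug_stress.sh: remove namespace 1 (`.sh@45`), re-add it backed by the Delay0 bdev (`.sh@46`), then grow the NULL1 null bdev by one (`.sh@49`/`.sh@50`, the climbing `null_size=10xx` counter), with a `kill -0` liveness check on the traffic process each pass (`.sh@44`). A minimal stand-alone sketch of that loop, reconstructed from the log (not the actual SPDK script), with `rpc.py` stubbed out so it runs without a live SPDK target:

```shell
# Sketch of the remove/add/resize cycle seen at .sh@44-@50 in the trace above.
# rpc() is a stand-in for /path/to/spdk/scripts/rpc.py; the NQN and bdev
# names (Delay0, NULL1) are taken verbatim from the log.
rpc() { echo "rpc.py $*" >/dev/null; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1012                              # counter seen as null_size=1012, 1013, ...

for _ in 1 2 3; do
    # (the real script also runs "kill -0 <perf pid>" here and stops when
    # the traffic generator has exited -- see the "No such process" line below)
    rpc nvmf_subsystem_remove_ns "$NQN" 1   # hot-remove namespace 1 under load
    rpc nvmf_subsystem_add_ns "$NQN" Delay0 # re-attach it, backed by the Delay0 bdev
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size" # grow NULL1 while I/O is in flight
done
```

The point of the cycle is that namespace attach/detach and bdev resize race against outstanding I/O; the suppressed `Read completed with error (sct=0, sc=11)` messages in the log are the expected host-side effect of reads landing on a just-removed namespace.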
00:27:41.679 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.937 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:41.937 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:42.195 true 00:27:42.195 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:42.195 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.130 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.388 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:43.388 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:43.645 true 00:27:43.645 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:43.645 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:43.902 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.160 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:44.160 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:44.417 true 00:27:44.417 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:44.418 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.675 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.933 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:44.933 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:45.190 true 00:27:45.190 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:45.190 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.123 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.381 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:46.381 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:46.639 true 00:27:46.639 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895 00:27:46.639 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.896 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.154 Initializing NVMe Controllers 00:27:47.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:47.154 Controller IO queue size 128, less than required. 00:27:47.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:47.154 Controller IO queue size 128, less than required. 00:27:47.154 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:47.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:47.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:47.154 Initialization complete. Launching workers.
00:27:47.154 ========================================================
00:27:47.154 Latency(us)
00:27:47.154 Device Information : IOPS MiB/s Average min max
00:27:47.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 508.93 0.25 102610.77 3330.72 1106759.28
00:27:47.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8527.95 4.16 15010.74 1448.35 449955.12
00:27:47.154 ========================================================
00:27:47.154 Total : 9036.89 4.41 19944.14 1448.35 1106759.28
00:27:47.154
00:27:47.154 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:27:47.154 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:27:47.412 true
00:27:47.412 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2478895
00:27:47.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2478895) - No such process
00:27:47.412 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2478895
00:27:47.412 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:47.669 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:47.927 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:47.927 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:47.927 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:47.927 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:47.927 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:48.185 null0 00:27:48.185 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:48.185 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:48.185 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:48.443 null1 00:27:48.443 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:48.443 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:48.443 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:48.699 null2 00:27:48.699 16:56:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:48.699 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:48.699 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:48.957 null3 00:27:48.957 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:48.957 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:48.957 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:49.214 null4 00:27:49.214 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:49.214 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:49.214 16:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:49.471 null5 00:27:49.471 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:49.471 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:49.471 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:50.037 null6 00:27:50.037 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:50.037 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:50.037 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:50.037 null7 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:50.295 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
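The interleaved xtrace above is ns_hotplug_stress.sh moving to its parallel phase: `nthreads=8` (`.sh@58`), one null bdev per worker (`bdev_null_create null0..null7`, `.sh@60`), then eight backgrounded `add_remove <nsid> <bdev>` workers whose PIDs are collected into `pids` (`.sh@62-@64`) for the later `wait` (`.sh@66`). A minimal stand-alone sketch of that fan-out, reconstructed from the log with `rpc.py` stubbed so it runs without an SPDK target:

```shell
# Sketch of the 8-way parallel add/remove phase traced above (.sh@58-@66).
# rpc() is a stand-in for /path/to/spdk/scripts/rpc.py; names, namespace IDs,
# and the 10-iteration bound (.sh@16: (( i < 10 ))) are taken from the log.
rpc() { echo "rpc.py $*" >/dev/null; }

add_remove() {                  # nsid=$1 bdev=$2: churn one namespace 10 times
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # backing bdev for worker i
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &         # nsid 1..8 churned concurrently
    pids+=($!)
done
wait "${pids[@]}"                            # matches ".sh@66 -- # wait <8 pids>"
```

Because the eight workers hammer the same subsystem concurrently, the target's namespace bookkeeping is exercised under contention, which is why the add and remove RPCs for different NSIDs appear interleaved out of order in the trace.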
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2482794 2482795 2482797 2482798 2482801 2482803 2482805 2482807
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.296 16:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:50.554 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:50.812 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:50.813 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:51.071 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.330 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:51.331 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.331 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.331 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:51.331 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:51.331 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:51.331 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:51.589 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:52.156 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:52.415 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:52.673 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:52.931 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.190 16:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:53.448 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.713 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.714 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:27:53.974 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.974 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.974 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:27:53.974 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:53.974 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:53.974 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:27:54.232 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:27:54.491 16:56:07
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:54.491 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:54.491 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:54.491 16:56:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:54.491 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:54.491 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:54.491 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:54.491 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:54.749 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:54.750 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.008 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:55.267 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:55.525 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:56.092 16:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.092 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:56.351 16:56:09 
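The repeated xtrace lines above all come from a short loop in ns_hotplug_stress.sh (the `@16`/`@17`/`@18` markers): ten iterations that attach bdevs null0..null7 as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 and then detach them again. A minimal dry-run reconstruction of that loop is sketched below; the relative `scripts/rpc.py` path stands in for the absolute workspace path in the log, the commands are collected as strings instead of being executed (the real calls need a running nvmf target), and the sequential ordering is a simplification — in the trace the add/remove calls are issued in randomized order.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress add/remove loop traced above.
# RPC path is shortened; NQN is taken verbatim from the log.
RPC="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

cmds=()
for (( i = 0; i < 10; i++ )); do
    # ns_hotplug_stress.sh@17: attach null0..null7 as NSIDs 1..8
    for n in {0..7}; do
        cmds+=("$RPC nvmf_subsystem_add_ns -n $((n + 1)) $NQN null$n")
    done
    # ns_hotplug_stress.sh@18: detach them again, stressing hotplug under I/O
    for n in {1..8}; do
        cmds+=("$RPC nvmf_subsystem_remove_ns $NQN $n")
    done
done

printf '%s\n' "${cmds[@]:0:2}"          # show the first two commands
echo "total commands: ${#cmds[@]}"      # 10 iterations x (8 adds + 8 removes)
```

Ten iterations of eight adds plus eight removes yields 160 RPC invocations, which matches the volume of `@17`/`@18` lines in the trace.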
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.351 rmmod nvme_tcp 00:27:56.351 rmmod nvme_fabrics 00:27:56.351 rmmod nvme_keyring 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2478480 ']' 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2478480 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2478480 ']' 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2478480 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2478480 00:27:56.351 16:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2478480' 00:27:56.351 killing process with pid 2478480 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2478480 00:27:56.351 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2478480 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.610 16:56:10 
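The teardown traced in this stretch (`nvmftestfini` → `nvmfcleanup` → `nvmf_tcp_fini` in nvmf/common.sh) reduces to a handful of steps: unload the NVMe-oF kernel modules, kill the target process, strip the SPDK iptables rules, and flush the test interface. The dry-run summary below just lists those steps as command strings; the pid 2478480 and the cvl_0_1 interface name are specific to this run, and this is a paraphrase of the traced shell functions, not the actual common.sh code.

```shell
#!/usr/bin/env bash
# Dry-run summary of the nvmftestfini teardown traced above.
# Pid and interface name are taken from this particular run's log.
teardown=(
    "sync"                                                   # common.sh@121
    "modprobe -v -r nvme-tcp"        # also drops nvme_fabrics, nvme_keyring
    "modprobe -v -r nvme-fabrics"                            # common.sh@127
    "kill 2478480"                   # killprocess: the nvmf_tgt reactor pid
    "iptables-save | grep -v SPDK_NVMF | iptables-restore"   # drop SPDK rules
    "ip -4 addr flush cvl_0_1"       # flush the test interface address
)
printf '%s\n' "${teardown[@]}"
```

Filtering `iptables-save` output through `grep -v SPDK_NVMF` before restoring is a way to delete only the rules the test added, leaving any pre-existing firewall configuration untouched.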
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.610 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.512 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.771 00:27:58.771 real 0m47.115s 00:27:58.771 user 3m17.076s 00:27:58.771 sys 0m22.033s 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:58.771 ************************************ 00:27:58.771 END TEST nvmf_ns_hotplug_stress 00:27:58.771 ************************************ 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:58.771 ************************************ 00:27:58.771 START TEST nvmf_delete_subsystem 00:27:58.771 ************************************ 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:58.771 * Looking for test storage... 00:27:58.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.771 
16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:58.771 16:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:58.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.771 --rc genhtml_branch_coverage=1 00:27:58.771 --rc genhtml_function_coverage=1 00:27:58.771 --rc genhtml_legend=1 00:27:58.771 --rc geninfo_all_blocks=1 00:27:58.771 --rc geninfo_unexecuted_blocks=1 00:27:58.771 00:27:58.771 ' 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:58.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.771 --rc genhtml_branch_coverage=1 00:27:58.771 --rc genhtml_function_coverage=1 00:27:58.771 --rc genhtml_legend=1 00:27:58.771 --rc geninfo_all_blocks=1 00:27:58.771 --rc geninfo_unexecuted_blocks=1 00:27:58.771 00:27:58.771 ' 00:27:58.771 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:58.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.771 --rc genhtml_branch_coverage=1 00:27:58.771 --rc genhtml_function_coverage=1 00:27:58.771 --rc genhtml_legend=1 00:27:58.772 --rc geninfo_all_blocks=1 00:27:58.772 --rc 
geninfo_unexecuted_blocks=1 00:27:58.772 00:27:58.772 ' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:58.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.772 --rc genhtml_branch_coverage=1 00:27:58.772 --rc genhtml_function_coverage=1 00:27:58.772 --rc genhtml_legend=1 00:27:58.772 --rc geninfo_all_blocks=1 00:27:58.772 --rc geninfo_unexecuted_blocks=1 00:27:58.772 00:27:58.772 ' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.772 
16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:58.772 16:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.772 16:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:01.304 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:28:01.304 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:01.304 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:01.305 16:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:01.305 Found net devices under 0000:09:00.0: cvl_0_0 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:01.305 Found net devices under 0000:09:00.1: cvl_0_1 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:01.305 16:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:01.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:28:01.305 00:28:01.305 --- 10.0.0.2 ping statistics --- 00:28:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.305 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:28:01.305 00:28:01.305 --- 10.0.0.1 ping statistics --- 00:28:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.305 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2485665 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2485665 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2485665 ']' 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.305 [2024-10-17 16:56:14.619395] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:01.305 [2024-10-17 16:56:14.620448] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization...
00:28:01.305 [2024-10-17 16:56:14.620497] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:01.305 [2024-10-17 16:56:14.683147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:01.305 [2024-10-17 16:56:14.738893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:01.305 [2024-10-17 16:56:14.738944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:01.305 [2024-10-17 16:56:14.738968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:01.305 [2024-10-17 16:56:14.738979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:01.305 [2024-10-17 16:56:14.738989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:01.305 [2024-10-17 16:56:14.740292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:01.305 [2024-10-17 16:56:14.740312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:01.305 [2024-10-17 16:56:14.823172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:01.305 [2024-10-17 16:56:14.823209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:01.305 [2024-10-17 16:56:14.823459] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.305 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.305 [2024-10-17 16:56:14.872922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.306 [2024-10-17 16:56:14.889235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.306 NULL1
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.306 Delay0
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2485697
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:28:01.306 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:01.306 [2024-10-17 16:56:14.961759] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
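The xtrace above is delete_subsystem.sh driving the SPDK target over JSON-RPC: create the TCP transport, a subsystem, a listener, a null bdev, a delay bdev on top of it, and attach the delay bdev as a namespace. A minimal sketch of the same sequence follows; `setup_target` and the `SPDK_RPC` variable are invented here for illustration (the real test uses the `rpc_cmd` helper from autotest_common.sh), and the flags are copied verbatim from the trace.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: SPDK_RPC would point at SPDK's scripts/rpc.py,
# which is what the test's rpc_cmd helper ultimately invokes.
rpc_cmd() { "${SPDK_RPC:-scripts/rpc.py}" "$@"; }

# Same RPC sequence as delete_subsystem.sh lines 15-24 in the trace.
setup_target() {
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                  # allow any host, serial number, max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s injected latency per I/O (microseconds)
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

# Usage (against a running nvmf target):
#   SPDK_RPC=/path/to/spdk/scripts/rpc.py setup_target
```

The delay bdev is the point of the test: with ~1 s of injected latency, perf I/O is guaranteed to still be in flight when the subsystem is deleted out from under it.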
00:28:03.833 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:03.833 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:03.833 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:03.833 Read completed with error (sct=0, sc=8)
00:28:03.833 Read completed with error (sct=0, sc=8)
00:28:03.833 Read completed with error (sct=0, sc=8)
00:28:03.833 Read completed with error (sct=0, sc=8)
00:28:03.833 starting I/O failed: -6
[... several hundred further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided: perf's in-flight I/O is aborted while the subsystem is deleted ...]
00:28:03.833 [2024-10-17 16:56:17.165553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0c570 is same with the state(6) to be set
00:28:03.834 [2024-10-17 16:56:17.166352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f31fc000c00 is same with the state(6) to be set
00:28:04.768 [2024-10-17 16:56:18.140414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0da70 is same with the state(6) to be set
00:28:04.768 [2024-10-17 16:56:18.165563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f31fc00d7c0 is same with the state(6) to be set
00:28:04.768 [2024-10-17 16:56:18.165759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f31fc00cfe0 is same with the state(6) to be set
00:28:04.768 [2024-10-17 16:56:18.168215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0c750 is same with the state(6) to be set
00:28:04.768 [2024-10-17 16:56:18.168709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0c390 is
same with the state(6) to be set
00:28:04.768 Initializing NVMe Controllers
00:28:04.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:04.768 Controller IO queue size 128, less than required.
00:28:04.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:04.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:04.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:04.768 Initialization complete. Launching workers.
00:28:04.768 ========================================================
00:28:04.768                                                                  Latency(us)
00:28:04.768 Device Information                                            :    IOPS   MiB/s    Average        min        max
00:28:04.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  173.69    0.08  888399.81     743.53 1011843.39
00:28:04.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  161.78    0.08  913804.17     405.53 1013137.42
00:28:04.768 ========================================================
00:28:04.768 Total                                                         :  335.48    0.16  900651.03     405.53 1013137.42
00:28:04.768
00:28:04.768 [2024-10-17 16:56:18.169126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0da70 (9): Bad file descriptor
00:28:04.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:04.768 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.768 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:04.768 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2485697
00:28:04.768 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:05.026 16:56:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:05.026 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2485697
00:28:05.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2485697) - No such process
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2485697
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2485697
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2485697
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:05.027 [2024-10-17 16:56:18.689185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2486098
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:05.027 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:05.285 [2024-10-17 16:56:18.749650] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
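The kill -0 / sleep 0.5 xtrace above is the script's poll loop waiting for the perf process to go away after its subsystem is deleted. A self-contained sketch of that pattern follows; `wait_for_exit` is an invented name (the real script inlines the loop at delete_subsystem.sh lines 57-60), and it relies on bash reaping background children so that `kill -0` fails once the process has exited.

```shell
#!/usr/bin/env bash
# Poll a PID until the process exits, mirroring the loop traced above.
# "kill -0" sends no signal; it only checks whether the process still exists.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2> /dev/null; do
        if (( delay++ > 20 )); then   # bound the wait (~10 s) so a hung perf fails fast
            return 1
        fi
        sleep 0.5
    done
    return 0
}

# Example: returns shortly after the backgrounded process exits.
sleep 1 &
wait_for_exit $!
```

Bounding the loop is what turns a hung perf process into a test failure instead of a stalled CI job; the unbounded alternative would wait forever on a wedged target.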
00:28:05.542 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:05.542 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:05.542 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:06.107 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:06.107 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:06.107 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:06.673 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:06.673 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:06.673 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:07.238 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:07.238 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:07.238 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:07.804 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:07.804 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:07.804 16:56:21
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:08.062 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:08.062 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:08.062 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:08.321 Initializing NVMe Controllers
00:28:08.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:08.321 Controller IO queue size 128, less than required.
00:28:08.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:08.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:08.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:08.321 Initialization complete. Launching workers.
00:28:08.321 ========================================================
00:28:08.321                                                                  Latency(us)
00:28:08.321 Device Information                                            :    IOPS   MiB/s    Average        min        max
00:28:08.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1004658.93 1000206.31 1043880.82
00:28:08.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004928.95 1000211.85 1011940.53
00:28:08.321 ========================================================
00:28:08.321 Total                                                         :  256.00    0.12 1004793.94 1000206.31 1043880.82
00:28:08.321
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2486098
00:28:08.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2486098) - No such process
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2486098
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:08.614 rmmod nvme_tcp
00:28:08.614 rmmod nvme_fabrics
00:28:08.614 rmmod nvme_keyring
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:28:08.614 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2485665 ']'
00:28:08.615 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2485665
00:28:08.615 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2485665 ']'
00:28:08.615 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2485665
00:28:08.615 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2485665
00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:08.900 16:56:22
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2485665' 00:28:08.900 killing process with pid 2485665 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2485665 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2485665 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.900 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.900 16:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:11.435 00:28:11.435 real 0m12.346s 00:28:11.435 user 0m24.668s 00:28:11.435 sys 0m3.846s 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:11.435 ************************************ 00:28:11.435 END TEST nvmf_delete_subsystem 00:28:11.435 ************************************ 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:11.435 ************************************ 00:28:11.435 START TEST nvmf_host_management 00:28:11.435 ************************************ 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:11.435 * Looking for test storage... 
00:28:11.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.435 16:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:11.435 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:11.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.436 --rc genhtml_branch_coverage=1 00:28:11.436 --rc genhtml_function_coverage=1 00:28:11.436 --rc genhtml_legend=1 00:28:11.436 --rc geninfo_all_blocks=1 00:28:11.436 --rc geninfo_unexecuted_blocks=1 00:28:11.436 00:28:11.436 ' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:11.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.436 --rc genhtml_branch_coverage=1 00:28:11.436 --rc genhtml_function_coverage=1 00:28:11.436 --rc genhtml_legend=1 00:28:11.436 --rc geninfo_all_blocks=1 00:28:11.436 --rc geninfo_unexecuted_blocks=1 00:28:11.436 00:28:11.436 ' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:11.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.436 --rc genhtml_branch_coverage=1 00:28:11.436 --rc genhtml_function_coverage=1 00:28:11.436 --rc genhtml_legend=1 00:28:11.436 --rc geninfo_all_blocks=1 00:28:11.436 --rc geninfo_unexecuted_blocks=1 00:28:11.436 00:28:11.436 ' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:11.436 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.436 --rc genhtml_branch_coverage=1 00:28:11.436 --rc genhtml_function_coverage=1 00:28:11.436 --rc genhtml_legend=1 00:28:11.436 --rc geninfo_all_blocks=1 00:28:11.436 --rc geninfo_unexecuted_blocks=1 00:28:11.436 00:28:11.436 ' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.436 16:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.436 
16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:11.436 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.437 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.339 
16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.339 16:56:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.339 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:13.340 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.340 16:56:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:13.340 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.340 16:56:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:13.340 Found net devices under 0000:09:00.0: cvl_0_0 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:13.340 Found net devices under 0000:09:00.1: cvl_0_1 00:28:13.340 16:56:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
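[Editor's note] The trace above (common.sh@409/@425/@426) maps each PCI address to its kernel net devices by globbing sysfs and stripping the directory prefix. A runnable sketch of that logic follows; it builds a throwaway fake sysfs tree under `mktemp -d` (an illustration only — the real script globs `/sys/bus/pci/devices/$pci/net/`), using the PCI address and device name seen in the log:

```shell
#!/usr/bin/env bash
# Fake sysfs layout so the glob logic can be exercised on any machine;
# the real path is /sys/bus/pci/devices/$pci/net/ (hypothetical stand-in here).
sysfs=$(mktemp -d)
pci=0000:09:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Same two expansions as common.sh@409 and @425: glob the net/ children,
# then strip everything up to the last slash to keep only interface names.
pci_net_devs=("$sysfs/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")

msg="Found net devices under $pci: ${pci_net_devs[*]}"
echo "$msg"
rm -rf "$sysfs"
```

The `${var##*/}` expansion is what turns the full sysfs path into the bare interface name (`cvl_0_0`) echoed at common.sh@426.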
00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:13.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:28:13.340 00:28:13.340 --- 10.0.0.2 ping statistics --- 00:28:13.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.340 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:28:13.340 00:28:13.340 --- 10.0.0.1 ping statistics --- 00:28:13.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.340 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
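[Editor's note] The `nvmf_tcp_init` sequence above (common.sh@267-@291) moves the target NIC into a private network namespace, assigns the 10.0.0.x addresses, opens TCP port 4420, and ping-verifies both directions. A dry-run sketch of that plumbing, using the interface and namespace names from the log (the `run` stub only prints each command — replace `echo` with `sudo` to actually apply it, which requires root):

```shell
#!/usr/bin/env bash
# Dry-run of the namespace plumbing performed by nvmf_tcp_init.
NS=cvl_0_0_ns_spdk   # target-side namespace, as in the log
TGT_IF=cvl_0_0       # target interface (moved into the namespace)
INI_IF=cvl_0_1       # initiator interface (stays in the root namespace)

plan=""
run() { plan="$plan$*"$'\n'; echo "+ $*"; }  # record + print instead of executing

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting only the target NIC in the namespace is what lets a single host act as both NVMe-oF target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and initiator (10.0.0.1, root namespace) over real hardware.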
00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:13.340 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2488556 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2488556 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2488556 ']' 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.340 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.341 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.341 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.600 [2024-10-17 16:56:27.069449] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:13.600 [2024-10-17 16:56:27.070566] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:28:13.600 [2024-10-17 16:56:27.070648] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.600 [2024-10-17 16:56:27.139218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.600 [2024-10-17 16:56:27.203279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.600 [2024-10-17 16:56:27.203342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.600 [2024-10-17 16:56:27.203367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.600 [2024-10-17 16:56:27.203380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.600 [2024-10-17 16:56:27.203392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:13.600 [2024-10-17 16:56:27.205089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.600 [2024-10-17 16:56:27.205133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.600 [2024-10-17 16:56:27.205195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:13.600 [2024-10-17 16:56:27.205198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.858 [2024-10-17 16:56:27.297202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:13.858 [2024-10-17 16:56:27.297441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:13.858 [2024-10-17 16:56:27.297729] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:13.858 [2024-10-17 16:56:27.298313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:13.858 [2024-10-17 16:56:27.298577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.858 [2024-10-17 16:56:27.349982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.858 16:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.858 Malloc0 00:28:13.858 [2024-10-17 16:56:27.430121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2488604 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2488604 /var/tmp/bdevperf.sock 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2488604 ']' 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:13.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:13.858 { 00:28:13.858 "params": { 00:28:13.858 "name": "Nvme$subsystem", 00:28:13.858 "trtype": "$TEST_TRANSPORT", 00:28:13.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.858 "adrfam": "ipv4", 00:28:13.858 "trsvcid": "$NVMF_PORT", 00:28:13.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.858 "hdgst": ${hdgst:-false}, 00:28:13.858 "ddgst": ${ddgst:-false} 00:28:13.858 }, 00:28:13.858 "method": "bdev_nvme_attach_controller" 00:28:13.858 } 00:28:13.858 EOF 00:28:13.858 )") 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:13.858 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:13.858 "params": { 00:28:13.858 "name": "Nvme0", 00:28:13.858 "trtype": "tcp", 00:28:13.858 "traddr": "10.0.0.2", 00:28:13.858 "adrfam": "ipv4", 00:28:13.858 "trsvcid": "4420", 00:28:13.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.858 "hdgst": false, 00:28:13.858 "ddgst": false 00:28:13.858 }, 00:28:13.858 "method": "bdev_nvme_attach_controller" 00:28:13.858 }' 00:28:13.858 [2024-10-17 16:56:27.506693] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:28:13.859 [2024-10-17 16:56:27.506785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488604 ] 00:28:14.116 [2024-10-17 16:56:27.567861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.116 [2024-10-17 16:56:27.627339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.116 Running I/O for 10 seconds... 
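[Editor's note] The `gen_nvmf_target_json 0` call traced above expands the common.sh@580 heredoc template into the controller JSON that bdevperf reads via `--json /dev/fd/63`. A standalone sketch of that expansion, with the variables set to the values visible in the log (unset `hdgst`/`ddgst` fall back to `false` via `${hdgst:-false}`):

```shell
#!/usr/bin/env bash
# Reproduce the per-subsystem JSON fragment from the common.sh@580 template.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The expanded output matches the `printf '%s\n'` block in the log: one `bdev_nvme_attach_controller` entry telling bdevperf to connect to `nqn.2016-06.io.spdk:cnode0` at 10.0.0.2:4420 over TCP.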
00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:14.375 16:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:14.375 16:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
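[Editor's note] The `waitforio` trace above (host_management.sh@54-@62) polls `bdev_get_iostat -b Nvme0n1` up to 10 times, sleeping 0.25 s between polls, until the bdev has served at least 100 reads. In the log the first poll returns 67 and the second 579, which trips the break. A standalone sketch of that loop; `rpc_stub` stands in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'` pipeline and replays the two values from the log:

```shell
#!/usr/bin/env bash
# Stubbed replay of waitforio: poll read counts until one reaches 100.
counts=(67 579)   # num_read_ops values observed in the log
poll=0
rpc_stub() { echo "${counts[$poll]}"; }  # stand-in for the jq'd RPC call

ret=1
i=10
while (( i != 0 )); do
  read_io_count=$(rpc_stub)
  if [ "$read_io_count" -ge 100 ]; then
    ret=0           # I/O is flowing; test may proceed
    break
  fi
  poll=$((poll + 1))  # next stubbed sample (real loop sleeps 0.25 here)
  (( i-- ))
done
echo "ret=$ret read_io_count=$read_io_count"
```

With `ret=0` the harness knows bdevperf is actively reading from the target, so it can safely move on to the disruptive step traced next (`nvmf_subsystem_remove_host`), which is what triggers the `ABORTED - SQ DELETION` completions in the dump that follows.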
00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.634 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.634 [2024-10-17 16:56:28.194123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.634 [2024-10-17 16:56:28.194183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.634 [2024-10-17 16:56:28.194212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.634 [2024-10-17 16:56:28.194229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.634 [2024-10-17 16:56:28.194245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.634 [2024-10-17 16:56:28.194259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.634 [2024-10-17 16:56:28.194291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.634 [2024-10-17 16:56:28.194314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.634 [2024-10-17 16:56:28.194329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.634 [2024-10-17 16:56:28.194342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.634 [2024-10-17 16:56:28.194368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.634 [2024-10-17 16:56:28.194382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 
16:56:28.194761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.194979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.194992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 
[2024-10-17 16:56:28.195447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.635 [2024-10-17 16:56:28.195602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.635 [2024-10-17 16:56:28.195616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:14.636 [2024-10-17 16:56:28.195937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.195979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.195994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.196016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.196053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.196082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.636 [2024-10-17 16:56:28.196110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196194] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1172a10 was disconnected and freed. reset controller. 00:28:14.636 [2024-10-17 16:56:28.196269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.636 [2024-10-17 16:56:28.196302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.636 [2024-10-17 16:56:28.196337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.636 [2024-10-17 16:56:28.196364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.636 [2024-10-17 16:56:28.196394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.196407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf59b00 is same with the state(6) to be set 00:28:14.636 [2024-10-17 16:56:28.197528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:28:14.636 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.636 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:14.636 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.636 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.636 task offset: 83328 on job bdev=Nvme0n1 fails 00:28:14.636 00:28:14.636 Latency(us) 00:28:14.636 [2024-10-17T14:56:28.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.636 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.636 Job: Nvme0n1 ended in about 0.40 seconds with error 00:28:14.636 Verification LBA range: start 0x0 length 0x400 00:28:14.636 Nvme0n1 : 0.40 1609.48 100.59 160.95 0.00 35093.12 2815.62 34758.35 00:28:14.636 [2024-10-17T14:56:28.326Z] =================================================================================================================== 00:28:14.636 [2024-10-17T14:56:28.326Z] Total : 1609.48 100.59 160.95 0.00 35093.12 2815.62 34758.35 00:28:14.636 [2024-10-17 16:56:28.199409] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:14.636 [2024-10-17 16:56:28.199438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf59b00 (9): Bad file descriptor 00:28:14.636 [2024-10-17 16:56:28.200580] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:14.636 [2024-10-17 16:56:28.200690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:14.636 [2024-10-17 
16:56:28.200719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.636 [2024-10-17 16:56:28.200751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:14.636 [2024-10-17 16:56:28.200770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:14.636 [2024-10-17 16:56:28.200787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.636 [2024-10-17 16:56:28.200800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf59b00 00:28:14.636 [2024-10-17 16:56:28.200836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf59b00 (9): Bad file descriptor 00:28:14.636 [2024-10-17 16:56:28.200868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:14.636 [2024-10-17 16:56:28.200884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:14.636 [2024-10-17 16:56:28.200901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:14.636 [2024-10-17 16:56:28.200921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:14.636 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.636 16:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2488604 00:28:15.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2488604) - No such process 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:15.569 { 00:28:15.569 "params": { 00:28:15.569 "name": "Nvme$subsystem", 00:28:15.569 "trtype": "$TEST_TRANSPORT", 00:28:15.569 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:15.569 "adrfam": "ipv4", 00:28:15.569 "trsvcid": "$NVMF_PORT", 00:28:15.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.569 "hdgst": ${hdgst:-false}, 00:28:15.569 "ddgst": ${ddgst:-false} 00:28:15.569 }, 00:28:15.569 "method": "bdev_nvme_attach_controller" 00:28:15.569 } 00:28:15.569 EOF 00:28:15.569 )") 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:15.569 16:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:15.569 "params": { 00:28:15.569 "name": "Nvme0", 00:28:15.569 "trtype": "tcp", 00:28:15.569 "traddr": "10.0.0.2", 00:28:15.569 "adrfam": "ipv4", 00:28:15.569 "trsvcid": "4420", 00:28:15.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.569 "hdgst": false, 00:28:15.569 "ddgst": false 00:28:15.569 }, 00:28:15.569 "method": "bdev_nvme_attach_controller" 00:28:15.569 }' 00:28:15.569 [2024-10-17 16:56:29.254416] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:28:15.569 [2024-10-17 16:56:29.254507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488877 ] 00:28:15.827 [2024-10-17 16:56:29.313270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.827 [2024-10-17 16:56:29.371226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.085 Running I/O for 1 seconds... 
00:28:17.019 1664.00 IOPS, 104.00 MiB/s 00:28:17.019 Latency(us) 00:28:17.019 [2024-10-17T14:56:30.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:17.019 Verification LBA range: start 0x0 length 0x400 00:28:17.019 Nvme0n1 : 1.03 1672.92 104.56 0.00 0.00 37641.54 6165.24 33204.91 00:28:17.019 [2024-10-17T14:56:30.709Z] =================================================================================================================== 00:28:17.019 [2024-10-17T14:56:30.709Z] Total : 1672.92 104.56 0.00 0.00 37641.54 6165.24 33204.91 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:17.277 
16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.277 rmmod nvme_tcp 00:28:17.277 rmmod nvme_fabrics 00:28:17.277 rmmod nvme_keyring 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2488556 ']' 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2488556 00:28:17.277 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2488556 ']' 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2488556 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2488556 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:17.278 16:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488556' 00:28:17.278 killing process with pid 2488556 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2488556 00:28:17.278 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2488556 00:28:17.536 [2024-10-17 16:56:31.124693] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.536 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:20.073 00:28:20.073 real 0m8.547s 00:28:20.073 user 0m16.567s 00:28:20.073 sys 0m3.645s 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:20.073 ************************************ 00:28:20.073 END TEST nvmf_host_management 00:28:20.073 ************************************ 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:20.073 ************************************ 00:28:20.073 START TEST nvmf_lvol 00:28:20.073 ************************************ 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:20.073 * Looking for test storage... 
00:28:20.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:28:20.073 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:20.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.074 --rc genhtml_branch_coverage=1 00:28:20.074 --rc genhtml_function_coverage=1 00:28:20.074 --rc genhtml_legend=1 00:28:20.074 --rc geninfo_all_blocks=1 00:28:20.074 --rc geninfo_unexecuted_blocks=1 00:28:20.074 00:28:20.074 ' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:20.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.074 --rc genhtml_branch_coverage=1 00:28:20.074 --rc genhtml_function_coverage=1 00:28:20.074 --rc genhtml_legend=1 00:28:20.074 --rc geninfo_all_blocks=1 00:28:20.074 --rc geninfo_unexecuted_blocks=1 00:28:20.074 00:28:20.074 ' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:20.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.074 --rc genhtml_branch_coverage=1 00:28:20.074 --rc genhtml_function_coverage=1 00:28:20.074 --rc genhtml_legend=1 00:28:20.074 --rc geninfo_all_blocks=1 00:28:20.074 --rc geninfo_unexecuted_blocks=1 00:28:20.074 00:28:20.074 ' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:20.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.074 --rc genhtml_branch_coverage=1 00:28:20.074 --rc genhtml_function_coverage=1 00:28:20.074 --rc genhtml_legend=1 00:28:20.074 --rc geninfo_all_blocks=1 00:28:20.074 --rc geninfo_unexecuted_blocks=1 00:28:20.074 00:28:20.074 ' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:20.074 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:20.075 
16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.075 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.975 16:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.975 16:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.975 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:21.975 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:21.976 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.976 16:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:21.976 Found net devices under 0000:09:00.0: cvl_0_0 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.976 16:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:21.976 Found net devices under 0000:09:00.1: cvl_0_1 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:28:21.976 00:28:21.976 --- 10.0.0.2 ping statistics --- 00:28:21.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.976 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:28:21.976 00:28:21.976 --- 10.0.0.1 ping statistics --- 00:28:21.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.976 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2490957 
00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2490957 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2490957 ']' 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:21.976 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:21.976 [2024-10-17 16:56:35.594361] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:21.977 [2024-10-17 16:56:35.595436] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:28:21.977 [2024-10-17 16:56:35.595486] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.977 [2024-10-17 16:56:35.656916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:22.235 [2024-10-17 16:56:35.714169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.235 [2024-10-17 16:56:35.714218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.235 [2024-10-17 16:56:35.714248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.235 [2024-10-17 16:56:35.714260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.235 [2024-10-17 16:56:35.714269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.235 [2024-10-17 16:56:35.715612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.235 [2024-10-17 16:56:35.715687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.235 [2024-10-17 16:56:35.715690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.235 [2024-10-17 16:56:35.803285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.235 [2024-10-17 16:56:35.803479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:22.235 [2024-10-17 16:56:35.803508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:22.235 [2024-10-17 16:56:35.803742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.235 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:22.494 [2024-10-17 16:56:36.108373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.494 16:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:22.752 16:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:22.753 16:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:23.319 16:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:23.319 16:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:23.577 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:23.835 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c858d9b9-5db2-449c-963f-8127ba31a046 00:28:23.835 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c858d9b9-5db2-449c-963f-8127ba31a046 lvol 20 00:28:24.093 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f205ec55-a8b5-4e1d-ade7-90d91a9187f6 00:28:24.093 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:24.351 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f205ec55-a8b5-4e1d-ade7-90d91a9187f6 00:28:24.609 16:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.866 [2024-10-17 16:56:38.424530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.867 16:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:25.125 
16:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2491379 00:28:25.125 16:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:25.125 16:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:26.059 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f205ec55-a8b5-4e1d-ade7-90d91a9187f6 MY_SNAPSHOT 00:28:26.626 16:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=35e07bcc-da17-44ee-8b90-e05de4258cbd 00:28:26.626 16:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f205ec55-a8b5-4e1d-ade7-90d91a9187f6 30 00:28:26.883 16:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 35e07bcc-da17-44ee-8b90-e05de4258cbd MY_CLONE 00:28:27.141 16:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=66d7c16f-ac32-4ca3-8ef8-b8b87e9f6c30 00:28:27.141 16:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 66d7c16f-ac32-4ca3-8ef8-b8b87e9f6c30 00:28:27.708 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2491379 00:28:35.816 Initializing NVMe Controllers 00:28:35.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:35.816 
Controller IO queue size 128, less than required.
00:28:35.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:35.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:28:35.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:28:35.816 Initialization complete. Launching workers.
00:28:35.816 ========================================================
00:28:35.816                                                                                                        Latency(us)
00:28:35.816 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:35.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10457.80      40.85   12242.70     440.54   55453.76
00:28:35.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10302.90      40.25   12427.79    2726.27   56508.56
00:28:35.816 ========================================================
00:28:35.816 Total                                                                    :   20760.70      81.10   12334.56     440.54   56508.56
00:28:35.816
00:28:35.816 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f205ec55-a8b5-4e1d-ade7-90d91a9187f6
00:28:36.074 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c858d9b9-5db2-449c-963f-8127ba31a046
00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- #
nvmftestfini 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.332 rmmod nvme_tcp 00:28:36.332 rmmod nvme_fabrics 00:28:36.332 rmmod nvme_keyring 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2490957 ']' 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2490957 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2490957 ']' 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2490957 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:36.332 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 2490957 00:28:36.591 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:36.591 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:36.591 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2490957' 00:28:36.591 killing process with pid 2490957 00:28:36.591 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2490957 00:28:36.591 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2490957 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.850 16:56:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.850 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.754 00:28:38.754 real 0m19.101s 00:28:38.754 user 0m56.063s 00:28:38.754 sys 0m7.796s 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:38.754 ************************************ 00:28:38.754 END TEST nvmf_lvol 00:28:38.754 ************************************ 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:38.754 ************************************ 00:28:38.754 START TEST nvmf_lvs_grow 00:28:38.754 ************************************ 00:28:38.754 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:39.013 * Looking for test storage... 
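For reference, the nvmf_lvol test that just completed above boils down to the RPC sequence below. This is a sketch reconstructed from the trace, not the test script itself: the rpc.py path is the one shown in the log, the UUIDs are captured into variables instead of hard-coded, and a running nvmf_tgt listening on /var/tmp/spdk.sock is assumed (the commands are inert without it, so they are not runnable standalone).

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_lvol flow traced above; assumes a live SPDK nvmf_tgt.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport init
$rpc bdev_malloc_create 64 512                          # -> Malloc0
$rpc bdev_malloc_create 64 512                          # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf drives randwrite I/O against the namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                        # grow the live lvol
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                         # decouple clone from snapshot
# Teardown, as in the trace:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
```

Every command here appears verbatim in the trace above; only the variable capture is an editorial convenience.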
00:28:39.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.013 16:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.013 16:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.013 --rc genhtml_branch_coverage=1 00:28:39.013 --rc genhtml_function_coverage=1 00:28:39.013 --rc genhtml_legend=1 00:28:39.013 --rc geninfo_all_blocks=1 00:28:39.013 --rc geninfo_unexecuted_blocks=1 00:28:39.013 00:28:39.013 ' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.013 --rc genhtml_branch_coverage=1 00:28:39.013 --rc genhtml_function_coverage=1 00:28:39.013 --rc genhtml_legend=1 00:28:39.013 --rc geninfo_all_blocks=1 00:28:39.013 --rc geninfo_unexecuted_blocks=1 00:28:39.013 00:28:39.013 ' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.013 --rc genhtml_branch_coverage=1 00:28:39.013 --rc genhtml_function_coverage=1 00:28:39.013 --rc genhtml_legend=1 00:28:39.013 --rc geninfo_all_blocks=1 00:28:39.013 --rc geninfo_unexecuted_blocks=1 00:28:39.013 00:28:39.013 ' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.013 --rc genhtml_branch_coverage=1 00:28:39.013 --rc genhtml_function_coverage=1 00:28:39.013 --rc genhtml_legend=1 00:28:39.013 --rc geninfo_all_blocks=1 00:28:39.013 --rc 
geninfo_unexecuted_blocks=1 00:28:39.013 00:28:39.013 ' 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.013 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:39.014 16:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.014 16:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.014 16:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.014 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:40.916 
16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.916 16:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.916 16:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:40.916 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:40.916 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:40.916 Found net devices under 0000:09:00.0: cvl_0_0 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.916 16:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:40.916 Found net devices under 0000:09:00.1: cvl_0_1 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.916 
16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.916 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:28:41.175 00:28:41.175 --- 10.0.0.2 ping statistics --- 00:28:41.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.175 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:28:41.175 00:28:41.175 --- 10.0.0.1 ping statistics --- 00:28:41.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.175 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:41.175 16:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2494634 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2494634 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2494634 ']' 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.175 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.176 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.176 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:41.176 [2024-10-17 16:56:54.708961] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:41.176 [2024-10-17 16:56:54.710196] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:28:41.176 [2024-10-17 16:56:54.710245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.176 [2024-10-17 16:56:54.782078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.176 [2024-10-17 16:56:54.843195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.176 [2024-10-17 16:56:54.843260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.176 [2024-10-17 16:56:54.843286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.176 [2024-10-17 16:56:54.843300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.176 [2024-10-17 16:56:54.843312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.176 [2024-10-17 16:56:54.843938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.433 [2024-10-17 16:56:54.934994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:41.433 [2024-10-17 16:56:54.935345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.433 16:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:41.691 [2024-10-17 16:56:55.244556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:41.691 ************************************ 00:28:41.691 START TEST lvs_grow_clean 00:28:41.691 ************************************ 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:28:41.691 16:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:41.691 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:41.949 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:41.949 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:42.208 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e760b7a-9613-438d-a153-1dc800d49791 00:28:42.208 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:42.208 16:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:42.466 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:42.466 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:42.466 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e760b7a-9613-438d-a153-1dc800d49791 lvol 150 00:28:43.032 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3277ea65-b25b-48e7-9b4f-e98875cf86c7 00:28:43.032 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:43.032 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:43.032 [2024-10-17 16:56:56.684465] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:43.032 [2024-10-17 16:56:56.684555] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:43.032 true 00:28:43.032 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:43.032 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:43.599 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:43.599 16:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:43.599 16:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3277ea65-b25b-48e7-9b4f-e98875cf86c7 00:28:43.857 16:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:44.115 [2024-10-17 16:56:57.776737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.115 16:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2495069 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2495069 /var/tmp/bdevperf.sock 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2495069 ']' 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:44.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:44.681 [2024-10-17 16:56:58.110227] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:28:44.681 [2024-10-17 16:56:58.110319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495069 ] 00:28:44.681 [2024-10-17 16:56:58.168328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.681 [2024-10-17 16:56:58.227960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:28:44.681 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:45.247 Nvme0n1 00:28:45.247 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:45.505 [ 00:28:45.505 { 00:28:45.505 "name": "Nvme0n1", 00:28:45.505 "aliases": [ 00:28:45.505 "3277ea65-b25b-48e7-9b4f-e98875cf86c7" 00:28:45.505 ], 00:28:45.505 "product_name": "NVMe disk", 00:28:45.505 
"block_size": 4096, 00:28:45.505 "num_blocks": 38912, 00:28:45.505 "uuid": "3277ea65-b25b-48e7-9b4f-e98875cf86c7", 00:28:45.505 "numa_id": 0, 00:28:45.505 "assigned_rate_limits": { 00:28:45.505 "rw_ios_per_sec": 0, 00:28:45.505 "rw_mbytes_per_sec": 0, 00:28:45.505 "r_mbytes_per_sec": 0, 00:28:45.505 "w_mbytes_per_sec": 0 00:28:45.505 }, 00:28:45.505 "claimed": false, 00:28:45.505 "zoned": false, 00:28:45.505 "supported_io_types": { 00:28:45.505 "read": true, 00:28:45.505 "write": true, 00:28:45.505 "unmap": true, 00:28:45.505 "flush": true, 00:28:45.505 "reset": true, 00:28:45.505 "nvme_admin": true, 00:28:45.505 "nvme_io": true, 00:28:45.505 "nvme_io_md": false, 00:28:45.505 "write_zeroes": true, 00:28:45.505 "zcopy": false, 00:28:45.505 "get_zone_info": false, 00:28:45.505 "zone_management": false, 00:28:45.505 "zone_append": false, 00:28:45.505 "compare": true, 00:28:45.505 "compare_and_write": true, 00:28:45.505 "abort": true, 00:28:45.505 "seek_hole": false, 00:28:45.505 "seek_data": false, 00:28:45.505 "copy": true, 00:28:45.505 "nvme_iov_md": false 00:28:45.505 }, 00:28:45.505 "memory_domains": [ 00:28:45.505 { 00:28:45.505 "dma_device_id": "system", 00:28:45.505 "dma_device_type": 1 00:28:45.505 } 00:28:45.505 ], 00:28:45.505 "driver_specific": { 00:28:45.505 "nvme": [ 00:28:45.505 { 00:28:45.505 "trid": { 00:28:45.505 "trtype": "TCP", 00:28:45.505 "adrfam": "IPv4", 00:28:45.505 "traddr": "10.0.0.2", 00:28:45.505 "trsvcid": "4420", 00:28:45.505 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:45.505 }, 00:28:45.505 "ctrlr_data": { 00:28:45.505 "cntlid": 1, 00:28:45.505 "vendor_id": "0x8086", 00:28:45.505 "model_number": "SPDK bdev Controller", 00:28:45.505 "serial_number": "SPDK0", 00:28:45.505 "firmware_revision": "25.01", 00:28:45.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:45.505 "oacs": { 00:28:45.505 "security": 0, 00:28:45.505 "format": 0, 00:28:45.505 "firmware": 0, 00:28:45.505 "ns_manage": 0 00:28:45.505 }, 00:28:45.505 "multi_ctrlr": true, 
00:28:45.505 "ana_reporting": false 00:28:45.505 }, 00:28:45.505 "vs": { 00:28:45.505 "nvme_version": "1.3" 00:28:45.505 }, 00:28:45.505 "ns_data": { 00:28:45.505 "id": 1, 00:28:45.505 "can_share": true 00:28:45.505 } 00:28:45.505 } 00:28:45.505 ], 00:28:45.505 "mp_policy": "active_passive" 00:28:45.505 } 00:28:45.505 } 00:28:45.505 ] 00:28:45.506 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2495201 00:28:45.506 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:45.506 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:45.506 Running I/O for 10 seconds... 00:28:46.440 Latency(us) 00:28:46.440 [2024-10-17T14:57:00.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.440 Nvme0n1 : 1.00 13877.00 54.21 0.00 0.00 0.00 0.00 0.00 00:28:46.440 [2024-10-17T14:57:00.130Z] =================================================================================================================== 00:28:46.440 [2024-10-17T14:57:00.130Z] Total : 13877.00 54.21 0.00 0.00 0.00 0.00 0.00 00:28:46.440 00:28:47.375 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:47.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:47.681 Nvme0n1 : 2.00 13974.00 54.59 0.00 0.00 0.00 0.00 0.00 00:28:47.681 [2024-10-17T14:57:01.371Z] 
=================================================================================================================== 00:28:47.681 [2024-10-17T14:57:01.371Z] Total : 13974.00 54.59 0.00 0.00 0.00 0.00 0.00 00:28:47.681 00:28:47.681 true 00:28:47.681 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:47.681 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:47.986 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:47.986 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:47.986 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2495201 00:28:48.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:48.556 Nvme0n1 : 3.00 14195.33 55.45 0.00 0.00 0.00 0.00 0.00 00:28:48.556 [2024-10-17T14:57:02.246Z] =================================================================================================================== 00:28:48.556 [2024-10-17T14:57:02.246Z] Total : 14195.33 55.45 0.00 0.00 0.00 0.00 0.00 00:28:48.556 00:28:49.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.490 Nvme0n1 : 4.00 14492.75 56.61 0.00 0.00 0.00 0.00 0.00 00:28:49.490 [2024-10-17T14:57:03.180Z] =================================================================================================================== 00:28:49.490 [2024-10-17T14:57:03.180Z] Total : 14492.75 56.61 0.00 0.00 0.00 0.00 0.00 00:28:49.490 00:28:50.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:28:50.867 Nvme0n1 : 5.00 14495.20 56.62 0.00 0.00 0.00 0.00 0.00 00:28:50.867 [2024-10-17T14:57:04.557Z] =================================================================================================================== 00:28:50.867 [2024-10-17T14:57:04.557Z] Total : 14495.20 56.62 0.00 0.00 0.00 0.00 0.00 00:28:50.867 00:28:51.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.802 Nvme0n1 : 6.00 14498.33 56.63 0.00 0.00 0.00 0.00 0.00 00:28:51.802 [2024-10-17T14:57:05.492Z] =================================================================================================================== 00:28:51.802 [2024-10-17T14:57:05.492Z] Total : 14498.33 56.63 0.00 0.00 0.00 0.00 0.00 00:28:51.802 00:28:52.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.737 Nvme0n1 : 7.00 14536.43 56.78 0.00 0.00 0.00 0.00 0.00 00:28:52.737 [2024-10-17T14:57:06.427Z] =================================================================================================================== 00:28:52.737 [2024-10-17T14:57:06.427Z] Total : 14536.43 56.78 0.00 0.00 0.00 0.00 0.00 00:28:52.737 00:28:53.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.672 Nvme0n1 : 8.00 14659.50 57.26 0.00 0.00 0.00 0.00 0.00 00:28:53.672 [2024-10-17T14:57:07.362Z] =================================================================================================================== 00:28:53.672 [2024-10-17T14:57:07.362Z] Total : 14659.50 57.26 0.00 0.00 0.00 0.00 0.00 00:28:53.672 00:28:54.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.610 Nvme0n1 : 9.00 14664.67 57.28 0.00 0.00 0.00 0.00 0.00 00:28:54.610 [2024-10-17T14:57:08.300Z] =================================================================================================================== 00:28:54.610 [2024-10-17T14:57:08.300Z] Total : 14664.67 57.28 0.00 0.00 0.00 0.00 0.00 00:28:54.610 
00:28:55.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.546 Nvme0n1 : 10.00 14769.80 57.69 0.00 0.00 0.00 0.00 0.00 00:28:55.546 [2024-10-17T14:57:09.236Z] =================================================================================================================== 00:28:55.546 [2024-10-17T14:57:09.236Z] Total : 14769.80 57.69 0.00 0.00 0.00 0.00 0.00 00:28:55.546 00:28:55.546 00:28:55.546 Latency(us) 00:28:55.546 [2024-10-17T14:57:09.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.546 Nvme0n1 : 10.01 14772.03 57.70 0.00 0.00 8659.01 3980.71 19320.98 00:28:55.546 [2024-10-17T14:57:09.236Z] =================================================================================================================== 00:28:55.546 [2024-10-17T14:57:09.236Z] Total : 14772.03 57.70 0.00 0.00 8659.01 3980.71 19320.98 00:28:55.546 { 00:28:55.546 "results": [ 00:28:55.546 { 00:28:55.546 "job": "Nvme0n1", 00:28:55.546 "core_mask": "0x2", 00:28:55.546 "workload": "randwrite", 00:28:55.546 "status": "finished", 00:28:55.546 "queue_depth": 128, 00:28:55.546 "io_size": 4096, 00:28:55.546 "runtime": 10.007157, 00:28:55.546 "iops": 14772.027659803878, 00:28:55.546 "mibps": 57.7032330461089, 00:28:55.546 "io_failed": 0, 00:28:55.546 "io_timeout": 0, 00:28:55.546 "avg_latency_us": 8659.01047106934, 00:28:55.546 "min_latency_us": 3980.705185185185, 00:28:55.546 "max_latency_us": 19320.983703703703 00:28:55.546 } 00:28:55.546 ], 00:28:55.546 "core_count": 1 00:28:55.546 } 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2495069 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2495069 ']' 00:28:55.546 16:57:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2495069 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495069 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495069' 00:28:55.546 killing process with pid 2495069 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2495069 00:28:55.546 Received shutdown signal, test time was about 10.000000 seconds 00:28:55.546 00:28:55.546 Latency(us) 00:28:55.546 [2024-10-17T14:57:09.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.546 [2024-10-17T14:57:09.236Z] =================================================================================================================== 00:28:55.546 [2024-10-17T14:57:09.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.546 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2495069 00:28:55.805 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:56.065 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:56.324 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:56.324 16:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:56.585 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:56.585 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:56.585 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:57.155 [2024-10-17 16:57:10.536517] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:57.155 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:57.155 request: 00:28:57.155 { 00:28:57.155 "uuid": "0e760b7a-9613-438d-a153-1dc800d49791", 00:28:57.155 "method": 
"bdev_lvol_get_lvstores", 00:28:57.155 "req_id": 1 00:28:57.155 } 00:28:57.155 Got JSON-RPC error response 00:28:57.155 response: 00:28:57.155 { 00:28:57.155 "code": -19, 00:28:57.155 "message": "No such device" 00:28:57.155 } 00:28:57.415 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:28:57.415 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:57.415 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:57.415 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:57.415 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:57.675 aio_bdev 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3277ea65-b25b-48e7-9b4f-e98875cf86c7 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=3277ea65-b25b-48e7-9b4f-e98875cf86c7 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:57.675 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:57.936 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3277ea65-b25b-48e7-9b4f-e98875cf86c7 -t 2000 00:28:58.196 [ 00:28:58.196 { 00:28:58.196 "name": "3277ea65-b25b-48e7-9b4f-e98875cf86c7", 00:28:58.196 "aliases": [ 00:28:58.196 "lvs/lvol" 00:28:58.196 ], 00:28:58.196 "product_name": "Logical Volume", 00:28:58.196 "block_size": 4096, 00:28:58.196 "num_blocks": 38912, 00:28:58.196 "uuid": "3277ea65-b25b-48e7-9b4f-e98875cf86c7", 00:28:58.196 "assigned_rate_limits": { 00:28:58.196 "rw_ios_per_sec": 0, 00:28:58.196 "rw_mbytes_per_sec": 0, 00:28:58.196 "r_mbytes_per_sec": 0, 00:28:58.196 "w_mbytes_per_sec": 0 00:28:58.196 }, 00:28:58.196 "claimed": false, 00:28:58.196 "zoned": false, 00:28:58.196 "supported_io_types": { 00:28:58.196 "read": true, 00:28:58.196 "write": true, 00:28:58.196 "unmap": true, 00:28:58.196 "flush": false, 00:28:58.196 "reset": true, 00:28:58.196 "nvme_admin": false, 00:28:58.196 "nvme_io": false, 00:28:58.196 "nvme_io_md": false, 00:28:58.196 "write_zeroes": true, 00:28:58.196 "zcopy": false, 00:28:58.196 "get_zone_info": false, 00:28:58.196 "zone_management": false, 00:28:58.196 "zone_append": false, 00:28:58.196 "compare": false, 00:28:58.196 "compare_and_write": false, 00:28:58.196 "abort": false, 00:28:58.196 "seek_hole": true, 00:28:58.196 "seek_data": true, 00:28:58.196 "copy": false, 00:28:58.196 "nvme_iov_md": false 00:28:58.196 }, 00:28:58.196 "driver_specific": { 00:28:58.196 "lvol": { 00:28:58.196 "lvol_store_uuid": "0e760b7a-9613-438d-a153-1dc800d49791", 00:28:58.196 "base_bdev": "aio_bdev", 00:28:58.196 
"thin_provision": false, 00:28:58.196 "num_allocated_clusters": 38, 00:28:58.196 "snapshot": false, 00:28:58.196 "clone": false, 00:28:58.196 "esnap_clone": false 00:28:58.196 } 00:28:58.196 } 00:28:58.196 } 00:28:58.196 ] 00:28:58.196 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:28:58.196 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:58.196 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:58.456 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:58.456 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e760b7a-9613-438d-a153-1dc800d49791 00:28:58.456 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:58.714 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:58.714 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3277ea65-b25b-48e7-9b4f-e98875cf86c7 00:28:58.972 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e760b7a-9613-438d-a153-1dc800d49791 
00:28:59.230 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:59.489 00:28:59.489 real 0m17.797s 00:28:59.489 user 0m17.373s 00:28:59.489 sys 0m1.828s 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:59.489 ************************************ 00:28:59.489 END TEST lvs_grow_clean 00:28:59.489 ************************************ 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:59.489 ************************************ 00:28:59.489 START TEST lvs_grow_dirty 00:28:59.489 ************************************ 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:59.489 16:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:59.489 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:59.749 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:59.749 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:00.319 16:57:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:00.319 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:00.319 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:00.319 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:00.319 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:00.319 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0a0e289-09b0-449c-bc1f-63df762ed904 lvol 150 00:29:00.580 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=813dd453-444e-461b-b966-caa9c14cfe40 00:29:00.580 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:00.580 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:00.839 [2024-10-17 16:57:14.520464] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:00.839 [2024-10-17 
16:57:14.520555] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:00.839 true 00:29:01.099 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:01.099 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:01.359 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:01.359 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:01.619 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 813dd453-444e-461b-b966-caa9c14cfe40 00:29:01.879 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:02.137 [2024-10-17 16:57:15.624766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.137 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2497851 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2497851 /var/tmp/bdevperf.sock 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2497851 ']' 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:02.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.397 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:02.397 [2024-10-17 16:57:15.956717] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:29:02.397 [2024-10-17 16:57:15.956797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497851 ] 00:29:02.397 [2024-10-17 16:57:16.016746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.397 [2024-10-17 16:57:16.079157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.657 16:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.657 16:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:02.657 16:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:02.915 Nvme0n1 00:29:02.915 16:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:03.174 [ 00:29:03.174 { 00:29:03.174 "name": "Nvme0n1", 00:29:03.174 "aliases": [ 00:29:03.174 "813dd453-444e-461b-b966-caa9c14cfe40" 00:29:03.174 ], 00:29:03.174 "product_name": "NVMe disk", 00:29:03.174 "block_size": 4096, 00:29:03.174 "num_blocks": 38912, 00:29:03.174 "uuid": "813dd453-444e-461b-b966-caa9c14cfe40", 00:29:03.174 "numa_id": 0, 00:29:03.174 "assigned_rate_limits": { 00:29:03.174 "rw_ios_per_sec": 0, 00:29:03.174 "rw_mbytes_per_sec": 0, 00:29:03.174 "r_mbytes_per_sec": 0, 00:29:03.174 "w_mbytes_per_sec": 0 00:29:03.174 }, 00:29:03.174 "claimed": false, 00:29:03.174 "zoned": false, 
00:29:03.174 "supported_io_types": { 00:29:03.174 "read": true, 00:29:03.174 "write": true, 00:29:03.174 "unmap": true, 00:29:03.174 "flush": true, 00:29:03.174 "reset": true, 00:29:03.174 "nvme_admin": true, 00:29:03.174 "nvme_io": true, 00:29:03.174 "nvme_io_md": false, 00:29:03.174 "write_zeroes": true, 00:29:03.174 "zcopy": false, 00:29:03.174 "get_zone_info": false, 00:29:03.174 "zone_management": false, 00:29:03.174 "zone_append": false, 00:29:03.174 "compare": true, 00:29:03.174 "compare_and_write": true, 00:29:03.174 "abort": true, 00:29:03.174 "seek_hole": false, 00:29:03.174 "seek_data": false, 00:29:03.174 "copy": true, 00:29:03.174 "nvme_iov_md": false 00:29:03.174 }, 00:29:03.174 "memory_domains": [ 00:29:03.174 { 00:29:03.174 "dma_device_id": "system", 00:29:03.174 "dma_device_type": 1 00:29:03.174 } 00:29:03.174 ], 00:29:03.174 "driver_specific": { 00:29:03.174 "nvme": [ 00:29:03.174 { 00:29:03.174 "trid": { 00:29:03.174 "trtype": "TCP", 00:29:03.174 "adrfam": "IPv4", 00:29:03.174 "traddr": "10.0.0.2", 00:29:03.174 "trsvcid": "4420", 00:29:03.174 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:03.174 }, 00:29:03.174 "ctrlr_data": { 00:29:03.174 "cntlid": 1, 00:29:03.174 "vendor_id": "0x8086", 00:29:03.174 "model_number": "SPDK bdev Controller", 00:29:03.174 "serial_number": "SPDK0", 00:29:03.174 "firmware_revision": "25.01", 00:29:03.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.174 "oacs": { 00:29:03.174 "security": 0, 00:29:03.174 "format": 0, 00:29:03.174 "firmware": 0, 00:29:03.174 "ns_manage": 0 00:29:03.174 }, 00:29:03.174 "multi_ctrlr": true, 00:29:03.174 "ana_reporting": false 00:29:03.174 }, 00:29:03.174 "vs": { 00:29:03.174 "nvme_version": "1.3" 00:29:03.174 }, 00:29:03.174 "ns_data": { 00:29:03.174 "id": 1, 00:29:03.174 "can_share": true 00:29:03.174 } 00:29:03.174 } 00:29:03.174 ], 00:29:03.174 "mp_policy": "active_passive" 00:29:03.174 } 00:29:03.174 } 00:29:03.174 ] 00:29:03.174 16:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2497867 00:29:03.174 16:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:03.174 16:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:03.433 Running I/O for 10 seconds... 00:29:04.373 Latency(us) 00:29:04.373 [2024-10-17T14:57:18.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.373 Nvme0n1 : 1.00 14076.00 54.98 0.00 0.00 0.00 0.00 0.00 00:29:04.373 [2024-10-17T14:57:18.063Z] =================================================================================================================== 00:29:04.373 [2024-10-17T14:57:18.063Z] Total : 14076.00 54.98 0.00 0.00 0.00 0.00 0.00 00:29:04.373 00:29:05.311 16:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:05.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.311 Nvme0n1 : 2.00 14394.50 56.23 0.00 0.00 0.00 0.00 0.00 00:29:05.311 [2024-10-17T14:57:19.001Z] =================================================================================================================== 00:29:05.311 [2024-10-17T14:57:19.001Z] Total : 14394.50 56.23 0.00 0.00 0.00 0.00 0.00 00:29:05.311 00:29:05.570 true 00:29:05.570 16:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:05.570 16:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:05.830 16:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:05.830 16:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:05.830 16:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2497867 00:29:06.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:06.400 Nvme0n1 : 3.00 14456.00 56.47 0.00 0.00 0.00 0.00 0.00 00:29:06.400 [2024-10-17T14:57:20.090Z] =================================================================================================================== 00:29:06.400 [2024-10-17T14:57:20.090Z] Total : 14456.00 56.47 0.00 0.00 0.00 0.00 0.00 00:29:06.400 00:29:07.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.338 Nvme0n1 : 4.00 14534.00 56.77 0.00 0.00 0.00 0.00 0.00 00:29:07.338 [2024-10-17T14:57:21.028Z] =================================================================================================================== 00:29:07.338 [2024-10-17T14:57:21.028Z] Total : 14534.00 56.77 0.00 0.00 0.00 0.00 0.00 00:29:07.338 00:29:08.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.277 Nvme0n1 : 5.00 14580.80 56.96 0.00 0.00 0.00 0.00 0.00 00:29:08.277 [2024-10-17T14:57:21.967Z] =================================================================================================================== 00:29:08.277 [2024-10-17T14:57:21.967Z] Total : 14580.80 56.96 0.00 0.00 0.00 0.00 0.00 00:29:08.277 00:29:09.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:09.660 Nvme0n1 : 6.00 14622.83 57.12 0.00 0.00 0.00 0.00 0.00 00:29:09.660 [2024-10-17T14:57:23.350Z] =================================================================================================================== 00:29:09.660 [2024-10-17T14:57:23.350Z] Total : 14622.83 57.12 0.00 0.00 0.00 0.00 0.00 00:29:09.660 00:29:10.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.598 Nvme0n1 : 7.00 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:29:10.598 [2024-10-17T14:57:24.288Z] =================================================================================================================== 00:29:10.598 [2024-10-17T14:57:24.288Z] Total : 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:29:10.598 00:29:11.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.533 Nvme0n1 : 8.00 14678.12 57.34 0.00 0.00 0.00 0.00 0.00 00:29:11.533 [2024-10-17T14:57:25.223Z] =================================================================================================================== 00:29:11.533 [2024-10-17T14:57:25.223Z] Total : 14678.12 57.34 0.00 0.00 0.00 0.00 0.00 00:29:11.533 00:29:12.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.472 Nvme0n1 : 9.00 14740.78 57.58 0.00 0.00 0.00 0.00 0.00 00:29:12.472 [2024-10-17T14:57:26.162Z] =================================================================================================================== 00:29:12.472 [2024-10-17T14:57:26.162Z] Total : 14740.78 57.58 0.00 0.00 0.00 0.00 0.00 00:29:12.472 00:29:13.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.410 Nvme0n1 : 10.00 14718.40 57.49 0.00 0.00 0.00 0.00 0.00 00:29:13.410 [2024-10-17T14:57:27.100Z] =================================================================================================================== 00:29:13.410 [2024-10-17T14:57:27.100Z] Total : 14718.40 57.49 0.00 0.00 0.00 0.00 0.00 00:29:13.410 00:29:13.410 
00:29:13.410 Latency(us) 00:29:13.410 [2024-10-17T14:57:27.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.410 Nvme0n1 : 10.01 14722.35 57.51 0.00 0.00 8689.38 4296.25 18350.08 00:29:13.410 [2024-10-17T14:57:27.100Z] =================================================================================================================== 00:29:13.410 [2024-10-17T14:57:27.100Z] Total : 14722.35 57.51 0.00 0.00 8689.38 4296.25 18350.08 00:29:13.410 { 00:29:13.410 "results": [ 00:29:13.410 { 00:29:13.410 "job": "Nvme0n1", 00:29:13.410 "core_mask": "0x2", 00:29:13.410 "workload": "randwrite", 00:29:13.410 "status": "finished", 00:29:13.410 "queue_depth": 128, 00:29:13.410 "io_size": 4096, 00:29:13.410 "runtime": 10.006013, 00:29:13.410 "iops": 14722.347452476826, 00:29:13.410 "mibps": 57.5091697362376, 00:29:13.410 "io_failed": 0, 00:29:13.410 "io_timeout": 0, 00:29:13.410 "avg_latency_us": 8689.377819292085, 00:29:13.410 "min_latency_us": 4296.248888888889, 00:29:13.410 "max_latency_us": 18350.08 00:29:13.410 } 00:29:13.410 ], 00:29:13.410 "core_count": 1 00:29:13.410 } 00:29:13.410 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2497851 00:29:13.410 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2497851 ']' 00:29:13.410 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2497851 00:29:13.410 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:29:13.410 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.410 16:57:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497851 00:29:13.410 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:13.410 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:13.410 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497851' 00:29:13.410 killing process with pid 2497851 00:29:13.410 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2497851 00:29:13.410 Received shutdown signal, test time was about 10.000000 seconds 00:29:13.410 00:29:13.410 Latency(us) 00:29:13.410 [2024-10-17T14:57:27.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.410 [2024-10-17T14:57:27.100Z] =================================================================================================================== 00:29:13.410 [2024-10-17T14:57:27.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.410 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2497851 00:29:13.668 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:13.926 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:14.186 16:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:14.186 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2494634 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2494634 00:29:14.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2494634 Killed "${NVMF_APP[@]}" "$@" 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2499184 00:29:14.447 16:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2499184 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2499184 ']' 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:14.447 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:14.706 [2024-10-17 16:57:28.148594] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:14.706 [2024-10-17 16:57:28.149702] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:29:14.706 [2024-10-17 16:57:28.149756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.706 [2024-10-17 16:57:28.214047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.706 [2024-10-17 16:57:28.272652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.706 [2024-10-17 16:57:28.272710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.706 [2024-10-17 16:57:28.272723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.706 [2024-10-17 16:57:28.272733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.706 [2024-10-17 16:57:28.272743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.706 [2024-10-17 16:57:28.273297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.706 [2024-10-17 16:57:28.366977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:14.706 [2024-10-17 16:57:28.367315] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:14.706 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:14.706 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:14.706 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:14.706 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:14.706 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:14.966 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.966 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:15.224 [2024-10-17 16:57:28.668327] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:15.224 [2024-10-17 16:57:28.668474] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:15.224 [2024-10-17 16:57:28.668534] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 813dd453-444e-461b-b966-caa9c14cfe40 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=813dd453-444e-461b-b966-caa9c14cfe40 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:15.224 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:15.482 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 813dd453-444e-461b-b966-caa9c14cfe40 -t 2000 00:29:15.740 [ 00:29:15.740 { 00:29:15.740 "name": "813dd453-444e-461b-b966-caa9c14cfe40", 00:29:15.740 "aliases": [ 00:29:15.740 "lvs/lvol" 00:29:15.740 ], 00:29:15.740 "product_name": "Logical Volume", 00:29:15.740 "block_size": 4096, 00:29:15.740 "num_blocks": 38912, 00:29:15.740 "uuid": "813dd453-444e-461b-b966-caa9c14cfe40", 00:29:15.740 "assigned_rate_limits": { 00:29:15.740 "rw_ios_per_sec": 0, 00:29:15.740 "rw_mbytes_per_sec": 0, 00:29:15.740 "r_mbytes_per_sec": 0, 00:29:15.740 "w_mbytes_per_sec": 0 00:29:15.740 }, 00:29:15.740 "claimed": false, 00:29:15.740 "zoned": false, 00:29:15.740 "supported_io_types": { 00:29:15.740 "read": true, 00:29:15.740 "write": true, 00:29:15.740 "unmap": true, 00:29:15.740 "flush": false, 00:29:15.740 "reset": true, 00:29:15.740 "nvme_admin": false, 00:29:15.740 "nvme_io": false, 00:29:15.740 "nvme_io_md": false, 00:29:15.740 "write_zeroes": true, 
00:29:15.740 "zcopy": false, 00:29:15.740 "get_zone_info": false, 00:29:15.740 "zone_management": false, 00:29:15.740 "zone_append": false, 00:29:15.740 "compare": false, 00:29:15.740 "compare_and_write": false, 00:29:15.740 "abort": false, 00:29:15.740 "seek_hole": true, 00:29:15.740 "seek_data": true, 00:29:15.740 "copy": false, 00:29:15.740 "nvme_iov_md": false 00:29:15.740 }, 00:29:15.740 "driver_specific": { 00:29:15.740 "lvol": { 00:29:15.740 "lvol_store_uuid": "a0a0e289-09b0-449c-bc1f-63df762ed904", 00:29:15.740 "base_bdev": "aio_bdev", 00:29:15.740 "thin_provision": false, 00:29:15.740 "num_allocated_clusters": 38, 00:29:15.740 "snapshot": false, 00:29:15.740 "clone": false, 00:29:15.740 "esnap_clone": false 00:29:15.740 } 00:29:15.740 } 00:29:15.740 } 00:29:15.740 ] 00:29:15.740 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:15.740 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:15.740 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:15.998 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:15.998 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:15.998 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:16.259 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:16.259 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:16.519 [2024-10-17 16:57:30.025783] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.519 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:16.520 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:16.778 request: 00:29:16.778 { 00:29:16.778 "uuid": "a0a0e289-09b0-449c-bc1f-63df762ed904", 00:29:16.778 "method": "bdev_lvol_get_lvstores", 00:29:16.778 "req_id": 1 00:29:16.778 } 00:29:16.778 Got JSON-RPC error response 00:29:16.778 response: 00:29:16.778 { 00:29:16.778 "code": -19, 00:29:16.778 "message": "No such device" 00:29:16.778 } 00:29:16.778 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:16.778 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:16.778 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:16.778 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:16.778 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:17.036 aio_bdev 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 813dd453-444e-461b-b966-caa9c14cfe40 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=813dd453-444e-461b-b966-caa9c14cfe40 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:17.036 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:17.621 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 813dd453-444e-461b-b966-caa9c14cfe40 -t 2000 00:29:17.621 [ 00:29:17.621 { 00:29:17.621 "name": "813dd453-444e-461b-b966-caa9c14cfe40", 00:29:17.621 "aliases": [ 00:29:17.621 "lvs/lvol" 00:29:17.621 ], 00:29:17.621 "product_name": "Logical Volume", 00:29:17.621 "block_size": 4096, 00:29:17.621 "num_blocks": 38912, 00:29:17.621 "uuid": "813dd453-444e-461b-b966-caa9c14cfe40", 00:29:17.621 "assigned_rate_limits": { 00:29:17.621 "rw_ios_per_sec": 0, 00:29:17.621 "rw_mbytes_per_sec": 0, 00:29:17.621 
"r_mbytes_per_sec": 0, 00:29:17.621 "w_mbytes_per_sec": 0 00:29:17.621 }, 00:29:17.621 "claimed": false, 00:29:17.621 "zoned": false, 00:29:17.621 "supported_io_types": { 00:29:17.621 "read": true, 00:29:17.621 "write": true, 00:29:17.621 "unmap": true, 00:29:17.621 "flush": false, 00:29:17.621 "reset": true, 00:29:17.621 "nvme_admin": false, 00:29:17.621 "nvme_io": false, 00:29:17.621 "nvme_io_md": false, 00:29:17.621 "write_zeroes": true, 00:29:17.621 "zcopy": false, 00:29:17.621 "get_zone_info": false, 00:29:17.621 "zone_management": false, 00:29:17.621 "zone_append": false, 00:29:17.621 "compare": false, 00:29:17.621 "compare_and_write": false, 00:29:17.621 "abort": false, 00:29:17.621 "seek_hole": true, 00:29:17.621 "seek_data": true, 00:29:17.621 "copy": false, 00:29:17.621 "nvme_iov_md": false 00:29:17.621 }, 00:29:17.621 "driver_specific": { 00:29:17.621 "lvol": { 00:29:17.621 "lvol_store_uuid": "a0a0e289-09b0-449c-bc1f-63df762ed904", 00:29:17.621 "base_bdev": "aio_bdev", 00:29:17.621 "thin_provision": false, 00:29:17.621 "num_allocated_clusters": 38, 00:29:17.621 "snapshot": false, 00:29:17.621 "clone": false, 00:29:17.621 "esnap_clone": false 00:29:17.621 } 00:29:17.621 } 00:29:17.621 } 00:29:17.621 ] 00:29:17.621 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:17.621 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:17.621 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:17.938 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:17.938 16:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:17.938 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:18.197 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:18.197 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 813dd453-444e-461b-b966-caa9c14cfe40 00:29:18.456 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0a0e289-09b0-449c-bc1f-63df762ed904 00:29:18.717 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:18.977 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:19.237 00:29:19.237 real 0m19.552s 00:29:19.237 user 0m36.675s 00:29:19.237 sys 0m4.665s 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:19.237 ************************************ 00:29:19.237 END TEST lvs_grow_dirty 00:29:19.237 ************************************ 
00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:29:19.237 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:19.237 nvmf_trace.0 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.238 16:57:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.238 rmmod nvme_tcp 00:29:19.238 rmmod nvme_fabrics 00:29:19.238 rmmod nvme_keyring 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2499184 ']' 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2499184 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2499184 ']' 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2499184 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499184 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.238 
16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499184' 00:29:19.238 killing process with pid 2499184 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2499184 00:29:19.238 16:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2499184 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.497 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.034 
16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.034 00:29:22.034 real 0m42.743s 00:29:22.034 user 0m55.774s 00:29:22.034 sys 0m8.422s 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.034 ************************************ 00:29:22.034 END TEST nvmf_lvs_grow 00:29:22.034 ************************************ 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:22.034 ************************************ 00:29:22.034 START TEST nvmf_bdev_io_wait 00:29:22.034 ************************************ 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:22.034 * Looking for test storage... 
00:29:22.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.034 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:22.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.035 --rc genhtml_branch_coverage=1 00:29:22.035 --rc genhtml_function_coverage=1 00:29:22.035 --rc genhtml_legend=1 00:29:22.035 --rc geninfo_all_blocks=1 00:29:22.035 --rc geninfo_unexecuted_blocks=1 00:29:22.035 00:29:22.035 ' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:22.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.035 --rc genhtml_branch_coverage=1 00:29:22.035 --rc genhtml_function_coverage=1 00:29:22.035 --rc genhtml_legend=1 00:29:22.035 --rc geninfo_all_blocks=1 00:29:22.035 --rc geninfo_unexecuted_blocks=1 00:29:22.035 00:29:22.035 ' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:22.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.035 --rc genhtml_branch_coverage=1 00:29:22.035 --rc genhtml_function_coverage=1 00:29:22.035 --rc genhtml_legend=1 00:29:22.035 --rc geninfo_all_blocks=1 00:29:22.035 --rc geninfo_unexecuted_blocks=1 00:29:22.035 00:29:22.035 ' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:22.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.035 --rc genhtml_branch_coverage=1 00:29:22.035 --rc genhtml_function_coverage=1 
00:29:22.035 --rc genhtml_legend=1 00:29:22.035 --rc geninfo_all_blocks=1 00:29:22.035 --rc geninfo_unexecuted_blocks=1 00:29:22.035 00:29:22.035 ' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:22.035 16:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.035 16:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.035 16:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:22.035 16:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.035 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:23.939 16:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:23.939 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:23.939 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.939 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:23.940 Found net devices under 0000:09:00.0: cvl_0_0 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:23.940 Found net devices under 0000:09:00.1: cvl_0_1 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:29:23.940 16:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:23.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:29:23.940 00:29:23.940 --- 10.0.0.2 ping statistics --- 00:29:23.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.940 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:29:23.940 00:29:23.940 --- 10.0.0.1 ping statistics --- 00:29:23.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.940 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:23.940 16:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2501720 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2501720 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2501720 ']' 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.940 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.199 [2024-10-17 16:57:37.637679] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:24.199 [2024-10-17 16:57:37.638824] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:29:24.199 [2024-10-17 16:57:37.638895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.200 [2024-10-17 16:57:37.715941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.200 [2024-10-17 16:57:37.783649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.200 [2024-10-17 16:57:37.783700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.200 [2024-10-17 16:57:37.783713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.200 [2024-10-17 16:57:37.783724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.200 [2024-10-17 16:57:37.783734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.200 [2024-10-17 16:57:37.785276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.200 [2024-10-17 16:57:37.785313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.200 [2024-10-17 16:57:37.785363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.200 [2024-10-17 16:57:37.785367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.200 [2024-10-17 16:57:37.789565] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.200 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.200 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:29:24.200 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:24.200 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.200 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 [2024-10-17 16:57:37.978547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.459 [2024-10-17 16:57:37.978801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:24.459 [2024-10-17 16:57:37.979784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:24.459 [2024-10-17 16:57:37.980666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 [2024-10-17 16:57:37.985770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 Malloc0 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:24.459 [2024-10-17 16:57:38.037925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2501857 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2501858 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:24.459 16:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2501860 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:24.459 { 00:29:24.459 "params": { 00:29:24.459 "name": "Nvme$subsystem", 00:29:24.459 "trtype": "$TEST_TRANSPORT", 00:29:24.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.459 "adrfam": "ipv4", 00:29:24.459 "trsvcid": "$NVMF_PORT", 00:29:24.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.459 "hdgst": ${hdgst:-false}, 00:29:24.459 "ddgst": ${ddgst:-false} 00:29:24.459 }, 00:29:24.459 "method": "bdev_nvme_attach_controller" 00:29:24.459 } 00:29:24.459 EOF 00:29:24.459 )") 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2501863 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:24.459 16:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:24.459 { 00:29:24.459 "params": { 00:29:24.459 "name": "Nvme$subsystem", 00:29:24.459 "trtype": "$TEST_TRANSPORT", 00:29:24.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.459 "adrfam": "ipv4", 00:29:24.459 "trsvcid": "$NVMF_PORT", 00:29:24.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.459 "hdgst": ${hdgst:-false}, 00:29:24.459 "ddgst": ${ddgst:-false} 00:29:24.459 }, 00:29:24.459 "method": "bdev_nvme_attach_controller" 00:29:24.459 } 00:29:24.459 EOF 00:29:24.459 )") 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:24.459 16:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:24.459 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:24.459 { 00:29:24.459 "params": { 00:29:24.459 "name": "Nvme$subsystem", 00:29:24.459 "trtype": "$TEST_TRANSPORT", 00:29:24.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.459 "adrfam": "ipv4", 00:29:24.459 "trsvcid": "$NVMF_PORT", 00:29:24.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.459 "hdgst": ${hdgst:-false}, 00:29:24.460 "ddgst": ${ddgst:-false} 00:29:24.460 }, 00:29:24.460 "method": "bdev_nvme_attach_controller" 00:29:24.460 } 00:29:24.460 EOF 00:29:24.460 )") 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:24.460 { 00:29:24.460 "params": { 00:29:24.460 "name": "Nvme$subsystem", 00:29:24.460 "trtype": "$TEST_TRANSPORT", 00:29:24.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.460 "adrfam": "ipv4", 00:29:24.460 "trsvcid": "$NVMF_PORT", 00:29:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.460 "hdgst": ${hdgst:-false}, 00:29:24.460 "ddgst": ${ddgst:-false} 00:29:24.460 }, 00:29:24.460 "method": "bdev_nvme_attach_controller" 00:29:24.460 } 00:29:24.460 EOF 00:29:24.460 )") 00:29:24.460 
16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2501857 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:24.460 "params": { 00:29:24.460 "name": "Nvme1", 00:29:24.460 "trtype": "tcp", 00:29:24.460 "traddr": "10.0.0.2", 00:29:24.460 "adrfam": "ipv4", 00:29:24.460 "trsvcid": "4420", 00:29:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.460 "hdgst": false, 00:29:24.460 "ddgst": false 00:29:24.460 }, 00:29:24.460 "method": "bdev_nvme_attach_controller" 00:29:24.460 }' 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:24.460 "params": { 00:29:24.460 "name": "Nvme1", 00:29:24.460 
"trtype": "tcp", 00:29:24.460 "traddr": "10.0.0.2", 00:29:24.460 "adrfam": "ipv4", 00:29:24.460 "trsvcid": "4420", 00:29:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.460 "hdgst": false, 00:29:24.460 "ddgst": false 00:29:24.460 }, 00:29:24.460 "method": "bdev_nvme_attach_controller" 00:29:24.460 }' 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:24.460 "params": { 00:29:24.460 "name": "Nvme1", 00:29:24.460 "trtype": "tcp", 00:29:24.460 "traddr": "10.0.0.2", 00:29:24.460 "adrfam": "ipv4", 00:29:24.460 "trsvcid": "4420", 00:29:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.460 "hdgst": false, 00:29:24.460 "ddgst": false 00:29:24.460 }, 00:29:24.460 "method": "bdev_nvme_attach_controller" 00:29:24.460 }' 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:24.460 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:24.460 "params": { 00:29:24.460 "name": "Nvme1", 00:29:24.460 "trtype": "tcp", 00:29:24.460 "traddr": "10.0.0.2", 00:29:24.460 "adrfam": "ipv4", 00:29:24.460 "trsvcid": "4420", 00:29:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.460 "hdgst": false, 00:29:24.460 "ddgst": false 00:29:24.460 }, 00:29:24.460 "method": "bdev_nvme_attach_controller" 00:29:24.460 }' 00:29:24.460 [2024-10-17 16:57:38.089703] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:29:24.460 [2024-10-17 16:57:38.089776] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:24.460 [2024-10-17 16:57:38.090755] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:29:24.460 [2024-10-17 16:57:38.090754] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:29:24.460 [2024-10-17 16:57:38.090755] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:29:24.460 [2024-10-17 16:57:38.090847] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:24.460 [2024-10-17 16:57:38.090848] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:24.460 [2024-10-17 16:57:38.090848] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:24.718 [2024-10-17 16:57:38.266181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.718 [2024-10-17 16:57:38.321158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:24.718 [2024-10-17 16:57:38.367095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.978 [2024-10-17 16:57:38.422253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:24.978 [2024-10-17 16:57:38.467774] app.c: 919:spdk_app_start:
*NOTICE*: Total cores available: 1 00:29:24.978 [2024-10-17 16:57:38.524473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:24.978 [2024-10-17 16:57:38.541317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.978 [2024-10-17 16:57:38.592857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:25.237 Running I/O for 1 seconds... 00:29:25.237 Running I/O for 1 seconds... 00:29:25.237 Running I/O for 1 seconds... 00:29:25.237 Running I/O for 1 seconds... 00:29:26.169 10573.00 IOPS, 41.30 MiB/s 00:29:26.169 Latency(us) 00:29:26.169 [2024-10-17T14:57:39.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.169 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:26.169 Nvme1n1 : 1.01 10632.14 41.53 0.00 0.00 11992.62 4636.07 14369.37 00:29:26.169 [2024-10-17T14:57:39.859Z] =================================================================================================================== 00:29:26.169 [2024-10-17T14:57:39.859Z] Total : 10632.14 41.53 0.00 0.00 11992.62 4636.07 14369.37 00:29:26.169 5012.00 IOPS, 19.58 MiB/s 00:29:26.169 Latency(us) 00:29:26.169 [2024-10-17T14:57:39.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.169 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:26.169 Nvme1n1 : 1.02 5016.36 19.60 0.00 0.00 25109.41 4830.25 43302.31 00:29:26.169 [2024-10-17T14:57:39.859Z] =================================================================================================================== 00:29:26.169 [2024-10-17T14:57:39.859Z] Total : 5016.36 19.60 0.00 0.00 25109.41 4830.25 43302.31 00:29:26.169 187392.00 IOPS, 732.00 MiB/s 00:29:26.169 Latency(us) 00:29:26.169 [2024-10-17T14:57:39.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.169 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:26.169 Nvme1n1 : 
1.00 187042.75 730.64 0.00 0.00 680.69 292.79 1856.85 00:29:26.169 [2024-10-17T14:57:39.859Z] =================================================================================================================== 00:29:26.169 [2024-10-17T14:57:39.859Z] Total : 187042.75 730.64 0.00 0.00 680.69 292.79 1856.85 00:29:26.427 16:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2501858 00:29:26.427 5366.00 IOPS, 20.96 MiB/s 00:29:26.427 Latency(us) 00:29:26.427 [2024-10-17T14:57:40.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.427 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:26.427 Nvme1n1 : 1.01 5465.70 21.35 0.00 0.00 23328.36 5339.97 49516.09 00:29:26.427 [2024-10-17T14:57:40.117Z] =================================================================================================================== 00:29:26.427 [2024-10-17T14:57:40.117Z] Total : 5465.70 21.35 0.00 0.00 23328.36 5339.97 49516.09 00:29:26.427 16:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2501860 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2501863 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 
00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.427 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.427 rmmod nvme_tcp 00:29:26.427 rmmod nvme_fabrics 00:29:26.687 rmmod nvme_keyring 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2501720 ']' 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2501720 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2501720 ']' 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2501720 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:29:26.687 16:57:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2501720 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2501720' 00:29:26.687 killing process with pid 2501720 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2501720 00:29:26.687 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2501720 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.948 16:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.856 00:29:28.856 real 0m7.271s 00:29:28.856 user 0m14.673s 00:29:28.856 sys 0m3.970s 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:28.856 ************************************ 00:29:28.856 END TEST nvmf_bdev_io_wait 00:29:28.856 ************************************ 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.856 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:28.856 ************************************ 00:29:28.856 START TEST nvmf_queue_depth 00:29:28.856 ************************************ 00:29:28.856 16:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:29.116 * Looking for test storage... 00:29:29.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.116 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # 
ver1_l=2 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 
00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:29.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.117 --rc genhtml_branch_coverage=1 00:29:29.117 --rc genhtml_function_coverage=1 00:29:29.117 --rc genhtml_legend=1 00:29:29.117 --rc geninfo_all_blocks=1 00:29:29.117 --rc geninfo_unexecuted_blocks=1 00:29:29.117 00:29:29.117 ' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:29.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.117 --rc genhtml_branch_coverage=1 00:29:29.117 --rc genhtml_function_coverage=1 00:29:29.117 --rc genhtml_legend=1 00:29:29.117 --rc geninfo_all_blocks=1 00:29:29.117 --rc geninfo_unexecuted_blocks=1 00:29:29.117 00:29:29.117 ' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:29.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.117 --rc genhtml_branch_coverage=1 00:29:29.117 --rc genhtml_function_coverage=1 00:29:29.117 --rc genhtml_legend=1 00:29:29.117 --rc geninfo_all_blocks=1 00:29:29.117 --rc geninfo_unexecuted_blocks=1 00:29:29.117 00:29:29.117 ' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:29.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.117 --rc genhtml_branch_coverage=1 00:29:29.117 --rc genhtml_function_coverage=1 00:29:29.117 --rc genhtml_legend=1 00:29:29.117 --rc geninfo_all_blocks=1 00:29:29.117 --rc geninfo_unexecuted_blocks=1 00:29:29.117 00:29:29.117 ' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.117 16:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.117 16:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.117 16:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.117 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:29.117 16:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:29.118 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.118 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.025 
16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:31.025 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.025 16:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:31.025 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:31.025 Found net devices under 0000:09:00.0: cvl_0_0 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:31.025 Found net devices under 0000:09:00.1: cvl_0_1 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:31.025 16:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:29:31.025 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.026 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:31.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:29:31.285 00:29:31.285 --- 10.0.0.2 ping statistics --- 00:29:31.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.285 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:29:31.285 00:29:31.285 --- 10.0.0.1 ping statistics --- 00:29:31.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.285 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:31.285 16:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.285 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2504077 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2504077 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2504077 ']' 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.286 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.286 [2024-10-17 16:57:44.904652] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:31.286 [2024-10-17 16:57:44.905771] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:29:31.286 [2024-10-17 16:57:44.905848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.545 [2024-10-17 16:57:44.977998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.545 [2024-10-17 16:57:45.039640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.545 [2024-10-17 16:57:45.039706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.545 [2024-10-17 16:57:45.039732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.545 [2024-10-17 16:57:45.039746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.545 [2024-10-17 16:57:45.039757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.545 [2024-10-17 16:57:45.040404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.545 [2024-10-17 16:57:45.133361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:31.545 [2024-10-17 16:57:45.133693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.545 [2024-10-17 16:57:45.189023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.545 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:31.546 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.546 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.546 Malloc0 00:29:31.546 16:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.546 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.546 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.546 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.804 [2024-10-17 16:57:45.249179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.804 
16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2504106 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2504106 /var/tmp/bdevperf.sock 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2504106 ']' 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:31.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.804 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:31.805 [2024-10-17 16:57:45.301059] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:29:31.805 [2024-10-17 16:57:45.301148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504106 ] 00:29:31.805 [2024-10-17 16:57:45.358349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.805 [2024-10-17 16:57:45.417291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.063 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.063 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:29:32.063 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:32.064 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.064 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:32.322 NVMe0n1 00:29:32.322 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.322 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:32.322 Running I/O for 10 seconds... 
00:29:34.194 8192.00 IOPS, 32.00 MiB/s [2024-10-17T14:57:49.261Z] 8192.00 IOPS, 32.00 MiB/s [2024-10-17T14:57:50.199Z] 8200.33 IOPS, 32.03 MiB/s [2024-10-17T14:57:51.137Z] 8281.75 IOPS, 32.35 MiB/s [2024-10-17T14:57:52.074Z] 8369.20 IOPS, 32.69 MiB/s [2024-10-17T14:57:53.011Z] 8366.67 IOPS, 32.68 MiB/s [2024-10-17T14:57:53.949Z] 8392.14 IOPS, 32.78 MiB/s [2024-10-17T14:57:55.329Z] 8404.62 IOPS, 32.83 MiB/s [2024-10-17T14:57:56.263Z] 8414.22 IOPS, 32.87 MiB/s [2024-10-17T14:57:56.263Z] 8401.00 IOPS, 32.82 MiB/s 00:29:42.573 Latency(us) 00:29:42.573 [2024-10-17T14:57:56.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.573 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:42.573 Verification LBA range: start 0x0 length 0x4000 00:29:42.573 NVMe0n1 : 10.09 8424.64 32.91 0.00 0.00 121043.09 21456.97 70293.43 00:29:42.573 [2024-10-17T14:57:56.263Z] =================================================================================================================== 00:29:42.573 [2024-10-17T14:57:56.263Z] Total : 8424.64 32.91 0.00 0.00 121043.09 21456.97 70293.43 00:29:42.573 { 00:29:42.573 "results": [ 00:29:42.573 { 00:29:42.573 "job": "NVMe0n1", 00:29:42.573 "core_mask": "0x1", 00:29:42.573 "workload": "verify", 00:29:42.573 "status": "finished", 00:29:42.573 "verify_range": { 00:29:42.573 "start": 0, 00:29:42.573 "length": 16384 00:29:42.573 }, 00:29:42.573 "queue_depth": 1024, 00:29:42.573 "io_size": 4096, 00:29:42.573 "runtime": 10.091471, 00:29:42.573 "iops": 8424.63898474266, 00:29:42.573 "mibps": 32.908746034151015, 00:29:42.573 "io_failed": 0, 00:29:42.573 "io_timeout": 0, 00:29:42.573 "avg_latency_us": 121043.09189313336, 00:29:42.573 "min_latency_us": 21456.971851851853, 00:29:42.573 "max_latency_us": 70293.42814814814 00:29:42.573 } 00:29:42.573 ], 00:29:42.573 "core_count": 1 00:29:42.573 } 00:29:42.573 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2504106 00:29:42.573 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2504106 ']' 00:29:42.573 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2504106 00:29:42.573 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2504106 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2504106' 00:29:42.574 killing process with pid 2504106 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2504106 00:29:42.574 Received shutdown signal, test time was about 10.000000 seconds 00:29:42.574 00:29:42.574 Latency(us) 00:29:42.574 [2024-10-17T14:57:56.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.574 [2024-10-17T14:57:56.264Z] =================================================================================================================== 00:29:42.574 [2024-10-17T14:57:56.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2504106 00:29:42.574 16:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.574 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.574 rmmod nvme_tcp 00:29:42.833 rmmod nvme_fabrics 00:29:42.833 rmmod nvme_keyring 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2504077 ']' 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2504077 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2504077 ']' 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2504077 00:29:42.833 16:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2504077 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2504077' 00:29:42.833 killing process with pid 2504077 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2504077 00:29:42.833 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2504077 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 
00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.094 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.008 00:29:45.008 real 0m16.124s 00:29:45.008 user 0m22.369s 00:29:45.008 sys 0m3.330s 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:45.008 ************************************ 00:29:45.008 END TEST nvmf_queue_depth 00:29:45.008 ************************************ 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.008 ************************************ 00:29:45.008 START 
TEST nvmf_target_multipath 00:29:45.008 ************************************ 00:29:45.008 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:45.267 * Looking for test storage... 00:29:45.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.267 16:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.267 --rc genhtml_branch_coverage=1 00:29:45.267 --rc genhtml_function_coverage=1 00:29:45.267 --rc genhtml_legend=1 00:29:45.267 --rc geninfo_all_blocks=1 00:29:45.267 --rc geninfo_unexecuted_blocks=1 00:29:45.267 00:29:45.267 ' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.267 --rc genhtml_branch_coverage=1 00:29:45.267 --rc genhtml_function_coverage=1 00:29:45.267 --rc genhtml_legend=1 00:29:45.267 --rc geninfo_all_blocks=1 00:29:45.267 --rc geninfo_unexecuted_blocks=1 00:29:45.267 00:29:45.267 ' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:45.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.267 --rc genhtml_branch_coverage=1 00:29:45.267 --rc genhtml_function_coverage=1 00:29:45.267 --rc genhtml_legend=1 00:29:45.267 --rc geninfo_all_blocks=1 00:29:45.267 --rc geninfo_unexecuted_blocks=1 00:29:45.267 00:29:45.267 ' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.267 --rc genhtml_branch_coverage=1 00:29:45.267 --rc genhtml_function_coverage=1 00:29:45.267 --rc genhtml_legend=1 00:29:45.267 --rc geninfo_all_blocks=1 00:29:45.267 --rc geninfo_unexecuted_blocks=1 00:29:45.267 00:29:45.267 ' 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.267 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.268 16:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.268 16:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.268 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.803 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:47.803 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.803 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:47.804 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:47.804 Found net devices under 0000:09:00.0: cvl_0_0 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.804 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:47.804 Found net devices under 0000:09:00.1: cvl_0_1 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.804 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.804 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:29:47.804 00:29:47.804 --- 10.0.0.2 ping statistics --- 00:29:47.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.804 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:29:47.804 00:29:47.804 --- 10.0.0.1 ping statistics --- 00:29:47.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.804 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:47.804 only one NIC for nvmf test 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:47.804 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.804 rmmod nvme_tcp 00:29:47.804 rmmod nvme_fabrics 00:29:47.804 rmmod nvme_keyring 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:29:47.804 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.804 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:49.706 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.707 
16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.707 00:29:49.707 real 0m4.598s 00:29:49.707 user 0m0.956s 00:29:49.707 sys 0m1.663s 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:49.707 ************************************ 00:29:49.707 END TEST nvmf_target_multipath 00:29:49.707 ************************************ 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:49.707 ************************************ 00:29:49.707 START TEST nvmf_zcopy 00:29:49.707 ************************************ 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:49.707 * Looking for test storage... 
00:29:49.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:29:49.707 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.966 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:49.967 16:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:49.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.967 --rc genhtml_branch_coverage=1 00:29:49.967 --rc genhtml_function_coverage=1 00:29:49.967 --rc genhtml_legend=1 00:29:49.967 --rc geninfo_all_blocks=1 00:29:49.967 --rc geninfo_unexecuted_blocks=1 00:29:49.967 00:29:49.967 ' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:49.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.967 --rc genhtml_branch_coverage=1 00:29:49.967 --rc genhtml_function_coverage=1 00:29:49.967 --rc genhtml_legend=1 00:29:49.967 --rc geninfo_all_blocks=1 00:29:49.967 --rc geninfo_unexecuted_blocks=1 00:29:49.967 00:29:49.967 ' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:49.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.967 --rc genhtml_branch_coverage=1 00:29:49.967 --rc genhtml_function_coverage=1 00:29:49.967 --rc genhtml_legend=1 00:29:49.967 --rc geninfo_all_blocks=1 00:29:49.967 --rc geninfo_unexecuted_blocks=1 00:29:49.967 00:29:49.967 ' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:49.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.967 --rc genhtml_branch_coverage=1 00:29:49.967 --rc genhtml_function_coverage=1 00:29:49.967 --rc genhtml_legend=1 00:29:49.967 --rc geninfo_all_blocks=1 00:29:49.967 --rc geninfo_unexecuted_blocks=1 00:29:49.967 00:29:49.967 ' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.967 16:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.967 16:58:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.967 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.504 
16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.504 16:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:52.504 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:52.504 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:52.504 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:52.505 Found net devices under 0000:09:00.0: cvl_0_0 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:52.505 Found net devices under 0000:09:00.1: cvl_0_1 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.505 16:58:05 
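The interface plumbing traced above (nvmf/common.sh, `nvmf_tcp_init`) can be condensed into a short sketch. This is not a runnable excerpt of the harness, just the essential commands from the trace: it requires root, and the `cvl_0_0`/`cvl_0_1` device names and 10.0.0.x addresses are specific to this host's ice-driver NICs and this run.

```shell
# Sketch of the nvmf_tcp_init topology, condensed from the trace above.
# Target-side NIC moves into its own network namespace so target and
# initiator traffic traverse a real TCP path on a single machine.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

# Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2
# inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
```

The trace then opens TCP port 4420 with an iptables ACCEPT rule and verifies both directions with `ping` before launching `nvmf_tgt` inside the namespace via `ip netns exec`.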
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:29:52.505 00:29:52.505 --- 10.0.0.2 ping statistics --- 00:29:52.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.505 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:29:52.505 00:29:52.505 --- 10.0.0.1 ping statistics --- 00:29:52.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.505 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=2509277 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2509277 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2509277 ']' 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.505 16:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.505 [2024-10-17 16:58:05.800705] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:52.505 [2024-10-17 16:58:05.801806] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:29:52.505 [2024-10-17 16:58:05.801887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.505 [2024-10-17 16:58:05.869789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.505 [2024-10-17 16:58:05.931227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.505 [2024-10-17 16:58:05.931287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.505 [2024-10-17 16:58:05.931313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.505 [2024-10-17 16:58:05.931327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.505 [2024-10-17 16:58:05.931346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.505 [2024-10-17 16:58:05.931973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.505 [2024-10-17 16:58:06.022095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.505 [2024-10-17 16:58:06.022447] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.505 [2024-10-17 16:58:06.072661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.505 
16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.505 [2024-10-17 16:58:06.088824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.505 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.506 malloc0 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:52.506 { 00:29:52.506 "params": { 00:29:52.506 "name": "Nvme$subsystem", 00:29:52.506 "trtype": "$TEST_TRANSPORT", 00:29:52.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.506 "adrfam": "ipv4", 00:29:52.506 "trsvcid": "$NVMF_PORT", 00:29:52.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.506 "hdgst": ${hdgst:-false}, 00:29:52.506 "ddgst": ${ddgst:-false} 00:29:52.506 }, 00:29:52.506 "method": "bdev_nvme_attach_controller" 00:29:52.506 } 00:29:52.506 EOF 00:29:52.506 )") 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:29:52.506 16:58:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:29:52.506 16:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:52.506 "params": { 00:29:52.506 "name": "Nvme1", 00:29:52.506 "trtype": "tcp", 00:29:52.506 "traddr": "10.0.0.2", 00:29:52.506 "adrfam": "ipv4", 00:29:52.506 "trsvcid": "4420", 00:29:52.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.506 "hdgst": false, 00:29:52.506 "ddgst": false 00:29:52.506 }, 00:29:52.506 "method": "bdev_nvme_attach_controller" 00:29:52.506 }' 00:29:52.506 [2024-10-17 16:58:06.166185] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:29:52.506 [2024-10-17 16:58:06.166276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509301 ] 00:29:52.764 [2024-10-17 16:58:06.227992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.764 [2024-10-17 16:58:06.290827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.022 Running I/O for 10 seconds... 
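The `gen_nvmf_target_json` expansion traced above follows a simple pattern: a heredoc template is expanded once per subsystem, collected into `config`, and joined with `jq` into the JSON that bdevperf reads from `/dev/fd/62`. A minimal stand-alone sketch of just the templating step, with the values this run substituted hard-coded (and `${hdgst:-false}`/`${ddgst:-false}` already resolved to `false`; no SPDK needed):

```shell
# Stand-alone sketch of the gen_nvmf_target_json heredoc template.
# Values below are the ones substituted in the trace above.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the harness this output is piped through `jq .` and handed to bdevperf as `--json /dev/fd/62`, which is exactly the `printf '%s\n' '{ ... }'` block visible in the trace.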
00:29:54.975 5229.00 IOPS, 40.85 MiB/s [2024-10-17T14:58:09.604Z] 5328.00 IOPS, 41.62 MiB/s [2024-10-17T14:58:10.542Z] 5329.67 IOPS, 41.64 MiB/s [2024-10-17T14:58:11.923Z] 5328.25 IOPS, 41.63 MiB/s [2024-10-17T14:58:12.862Z] 5349.80 IOPS, 41.80 MiB/s [2024-10-17T14:58:13.802Z] 5350.33 IOPS, 41.80 MiB/s [2024-10-17T14:58:14.740Z] 5368.00 IOPS, 41.94 MiB/s [2024-10-17T14:58:15.679Z] 5365.75 IOPS, 41.92 MiB/s [2024-10-17T14:58:16.619Z] 5378.22 IOPS, 42.02 MiB/s [2024-10-17T14:58:16.619Z] 5375.50 IOPS, 42.00 MiB/s 00:30:02.929 Latency(us) 00:30:02.929 [2024-10-17T14:58:16.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.929 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:02.929 Verification LBA range: start 0x0 length 0x1000 00:30:02.929 Nvme1n1 : 10.02 5378.84 42.02 0.00 0.00 23731.52 2985.53 31651.46 00:30:02.929 [2024-10-17T14:58:16.619Z] =================================================================================================================== 00:30:02.929 [2024-10-17T14:58:16.619Z] Total : 5378.84 42.02 0.00 0.00 23731.52 2985.53 31651.46 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2510600 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:30:03.189 16:58:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.189 { 00:30:03.189 "params": { 00:30:03.189 "name": "Nvme$subsystem", 00:30:03.189 "trtype": "$TEST_TRANSPORT", 00:30:03.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.189 "adrfam": "ipv4", 00:30:03.189 "trsvcid": "$NVMF_PORT", 00:30:03.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.189 "hdgst": ${hdgst:-false}, 00:30:03.189 "ddgst": ${ddgst:-false} 00:30:03.189 }, 00:30:03.189 "method": "bdev_nvme_attach_controller" 00:30:03.189 } 00:30:03.189 EOF 00:30:03.189 )") 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:30:03.189 [2024-10-17 16:58:16.776561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.776610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:30:03.189 16:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:03.189 "params": { 00:30:03.189 "name": "Nvme1", 00:30:03.189 "trtype": "tcp", 00:30:03.189 "traddr": "10.0.0.2", 00:30:03.189 "adrfam": "ipv4", 00:30:03.189 "trsvcid": "4420", 00:30:03.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.189 "hdgst": false, 00:30:03.189 "ddgst": false 00:30:03.189 }, 00:30:03.189 "method": "bdev_nvme_attach_controller" 00:30:03.189 }' 00:30:03.189 [2024-10-17 16:58:16.784496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.784523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.189 [2024-10-17 16:58:16.792496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.792521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.189 [2024-10-17 16:58:16.800493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.800516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.189 [2024-10-17 16:58:16.808491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.808514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.189 [2024-10-17 16:58:16.816490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.816514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.189 [2024-10-17 16:58:16.822123] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:30:03.189 [2024-10-17 16:58:16.822200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2510600 ] 00:30:03.189 [2024-10-17 16:58:16.824493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.189 [2024-10-17 16:58:16.824517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.190 [2024-10-17 16:58:16.832494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.190 [2024-10-17 16:58:16.832518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.190 [2024-10-17 16:58:16.840486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.190 [2024-10-17 16:58:16.840508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.190 [2024-10-17 16:58:16.848481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.190 [2024-10-17 16:58:16.848502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.190 [2024-10-17 16:58:16.856493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.190 [2024-10-17 16:58:16.856517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.190 [2024-10-17 16:58:16.864494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.190 [2024-10-17 16:58:16.864518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.190 [2024-10-17 16:58:16.872496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.190 [2024-10-17 16:58:16.872516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:30:03.450 [2024-10-17 16:58:16.880484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.880505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.885399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.450 [2024-10-17 16:58:16.888493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.888516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.896538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.896578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.904517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.904551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.912495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.912519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.920493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.920516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.928493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.928517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.936492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.936516] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.944497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.944522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.952494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.952518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.953049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.450 [2024-10-17 16:58:16.960495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.960520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.968527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.968562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.976537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.976580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.984534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.984576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:16.992537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:16.992580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.000537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:30:03.450 [2024-10-17 16:58:17.000580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.008537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.008579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.016537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.016581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.024495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.024519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.032533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.032574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.040534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.040579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.048536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.048578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.056493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.056518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.064501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 
16:58:17.064526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.072502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.072532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.080502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.080531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.088500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.088528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.096501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.096528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.104498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.104524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.112494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.112519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.120494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.120518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.128493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.128518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:30:03.450 [2024-10-17 16:58:17.136494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.450 [2024-10-17 16:58:17.136518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.144499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.144528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.152499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.152528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.160501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.160529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.169585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.169617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.176503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.176531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 Running I/O for 5 seconds... 
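The `--json /dev/fd/63` argument in the bdevperf invocation above comes from bash process substitution: the harness runs the tool with `--json <(gen_nvmf_target_json)`, so the generated config is handed over as a readable file-descriptor path instead of a temp file. A minimal self-contained illustration of that mechanism (the `fake_config` helper below is a stand-in for the generator, not part of the harness):

```shell
#!/usr/bin/env bash
# Stand-in for gen_nvmf_target_json: prints a tiny JSON document.
fake_config() {
  printf '{"method": "bdev_nvme_attach_controller"}\n'
}

# <(...) expands to a path such as /dev/fd/63; the consumer reads
# the generator's stdout as if it were an ordinary file.
read_back=$(cat <(fake_config))
printf '%s\n' "$read_back"
```

The descriptor number varies between runs (the 10-second pass above used `/dev/fd/62`, this one `/dev/fd/63`), which is expected: bash picks the next free fd for each substitution.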
00:30:03.726 [2024-10-17 16:58:17.184503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.184533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.198727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.198761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.211126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.211154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.225483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.225515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.235220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.235248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.248469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.248500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.260453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.260486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.272292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.272323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.284549] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.284580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.296533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.296565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.308080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.308108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.320198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.320226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.332091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.332129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.344153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.344195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.356315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.356347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.368186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.368214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.379941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.379972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.392231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.392259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.404485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.404516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.726 [2024-10-17 16:58:17.416085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.726 [2024-10-17 16:58:17.416113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.986 [2024-10-17 16:58:17.428454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.428487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.439425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.439456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.451852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.451882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.463641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.463672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.475760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 
[2024-10-17 16:58:17.475792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.488576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.488607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.500780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.500810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.513265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.513309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.530233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.530261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.546646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.546677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.557820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.557852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.571387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.571430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.584148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.584177] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.596130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.596166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.608512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.608543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.621178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.621205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.637638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.637670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.648232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.648260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.661467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.661499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:03.987 [2024-10-17 16:58:17.673293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:03.987 [2024-10-17 16:58:17.673325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.685384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.685415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:04.246 [2024-10-17 16:58:17.697737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.697768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.709956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.709988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.721568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.721600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.733464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.733495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.744948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.744979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.757008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.757050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.767747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.767777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.780517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.780548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [2024-10-17 16:58:17.792679] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:04.246 [2024-10-17 16:58:17.792711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:04.246 [the same two *ERROR* lines repeat every ~10-18 ms from 16:58:17.805046 through 16:58:18.187511; elapsed stamps 00:30:04.246-00:30:04.768] 00:30:04.768 10528.00 IOPS, 82.25 MiB/s [2024-10-17T14:58:18.458Z] [the same two *ERROR* lines continue repeating from 16:58:18.199875 through 16:58:19.188155; elapsed stamps 00:30:04.768-00:30:05.547] 00:30:05.547 10543.50 IOPS, 82.37 MiB/s [2024-10-17T14:58:19.237Z] [the same two *ERROR* lines continue repeating from 16:58:19.200412 through 16:58:19.977525; elapsed stamps 00:30:05.547-00:30:06.323] 00:30:06.323 [2024-10-17 16:58:19.990510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.323 [2024-10-17 16:58:19.990539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:30:06.323 [2024-10-17 16:58:20.002820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.323 [2024-10-17 16:58:20.002860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.016056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.016089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.028086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.028120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.040354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.040397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.052481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.052514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.064408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.064440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.076338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.076365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.088679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.088705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.100457] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.100488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.112192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.112219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.124313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.124346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.135985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.136029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.150091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.150119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.160948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.160979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.173461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.173491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.185950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.185980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 10540.33 IOPS, 82.35 MiB/s [2024-10-17T14:58:20.272Z] [2024-10-17 16:58:20.197925] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.197955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.209868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.209909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.221795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.221826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.238560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.238591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.253714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.253745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.582 [2024-10-17 16:58:20.263423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.582 [2024-10-17 16:58:20.263453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.276823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.276852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.294524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.294555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.308128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.308156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.318397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.318428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.333817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.333846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.344555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.344586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.357270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.357317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.369430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.369459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.381526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.381556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.393241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.393269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.405963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 
[2024-10-17 16:58:20.405994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.841 [2024-10-17 16:58:20.422869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.841 [2024-10-17 16:58:20.422900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.433890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.433922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.450521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.450552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.466066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.466102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.476635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.476666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.489019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.489063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.505448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.505479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.515446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.515478] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:06.842 [2024-10-17 16:58:20.528488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:06.842 [2024-10-17 16:58:20.528520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.100 [2024-10-17 16:58:20.540432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.100 [2024-10-17 16:58:20.540463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.100 [2024-10-17 16:58:20.552459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.100 [2024-10-17 16:58:20.552489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.100 [2024-10-17 16:58:20.564559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.100 [2024-10-17 16:58:20.564590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.100 [2024-10-17 16:58:20.576929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.100 [2024-10-17 16:58:20.576959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.100 [2024-10-17 16:58:20.587871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.100 [2024-10-17 16:58:20.587902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.600970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.601015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.611506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.611538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:07.101 [2024-10-17 16:58:20.623867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.623898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.635838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.635868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.647684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.647715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.659911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.659941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.671941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.671970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.684322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.684353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.695777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.695815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.708098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.708126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.719838] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.719868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.731703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.731734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.746412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.746442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.762443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.762473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.773528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.773559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.101 [2024-10-17 16:58:20.789463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.101 [2024-10-17 16:58:20.789490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.801060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.801086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.811776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.811807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.824159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.824187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.836685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.836716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.854045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.854080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.869234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.869262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.880167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.880195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.893084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.893112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.904543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.904571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.915805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.915830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.928412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 
[2024-10-17 16:58:20.928439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.938460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.938496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.950632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.950659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.961970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.961997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.973115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.973142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.984589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.984614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:20.997189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:20.997216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:21.006886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:21.006913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.359 [2024-10-17 16:58:21.019238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.359 [2024-10-17 16:58:21.019266] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.360 [2024-10-17 16:58:21.033984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.360 [2024-10-17 16:58:21.034025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.360 [2024-10-17 16:58:21.043654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.360 [2024-10-17 16:58:21.043681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.055633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.055661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.070896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.070923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.085696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.085724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.095485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.095513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.107449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.107474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.122382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.122409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:07.618 [2024-10-17 16:58:21.132732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.132759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.618 [2024-10-17 16:58:21.144873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.618 [2024-10-17 16:58:21.144900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.161317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.161344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.171151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.171178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.182914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.182940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 10615.50 IOPS, 82.93 MiB/s [2024-10-17T14:58:21.309Z] [2024-10-17 16:58:21.197463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.197490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.207184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.207211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.219288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.219331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:07.619 [2024-10-17 16:58:21.234039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.234067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.243878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.243906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.255961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.255988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.268630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.268657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.278150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.278177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.293695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.293720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.619 [2024-10-17 16:58:21.304059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.619 [2024-10-17 16:58:21.304086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.315126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.315154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.331067] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.331095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.344630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.344657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.354140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.354171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.366296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.366323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.377513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.377538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.389153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.389179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.399781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.399807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.411877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.877 [2024-10-17 16:58:21.411902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.877 [2024-10-17 16:58:21.425899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.425926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.435491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.435519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.447806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.447846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.463677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.463702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.476210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.476237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.485726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.485752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.498278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.498319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.509574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.509600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.525737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 
[2024-10-17 16:58:21.525765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.535574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.535602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.547793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.547818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:07.878 [2024-10-17 16:58:21.560888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:07.878 [2024-10-17 16:58:21.560914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.570540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.570566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.582894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.582921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.594454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.594478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.605616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.605657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.617022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.617057] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.628770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.628795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.639827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.639851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.654663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.654689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.664239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.664265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.676466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.676505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.687245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.687271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.703317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.703343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.715811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.715838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:08.137 [2024-10-17 16:58:21.725456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.725482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.737745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.737770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.748606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.748648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.759711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.759737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.772953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.772979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.781947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.781973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.793955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.793981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.805143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.805170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.137 [2024-10-17 16:58:21.816383] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.137 [2024-10-17 16:58:21.816411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.830216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.395 [2024-10-17 16:58:21.830244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.839917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.395 [2024-10-17 16:58:21.839967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.852060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.395 [2024-10-17 16:58:21.852088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.863489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.395 [2024-10-17 16:58:21.863528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.877955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.395 [2024-10-17 16:58:21.877982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.887152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.395 [2024-10-17 16:58:21.887179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.395 [2024-10-17 16:58:21.899911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.899941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.911896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.911926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.924744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.924774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.942374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.942403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.953409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.953438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.969427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.969456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.980995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.981037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:21.993068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:21.993095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.010265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:22.010291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.024848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 
[2024-10-17 16:58:22.024877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.034631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:22.034661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.047793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:22.047822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.060014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:22.060058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.072414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:22.072444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.396 [2024-10-17 16:58:22.084232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.396 [2024-10-17 16:58:22.084266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.096080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.096106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.108378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.108410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.120020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.120064] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.131393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.131423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.143660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.143692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.155529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.155559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.167869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.167899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.180999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.181038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 10723.60 IOPS, 83.78 MiB/s [2024-10-17T14:58:22.344Z] [2024-10-17 16:58:22.199065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.199092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.208509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.208538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 00:30:08.654 Latency(us) 00:30:08.654 [2024-10-17T14:58:22.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.654 Job: Nvme1n1 
(Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:08.654 Nvme1n1 : 5.01 10726.41 83.80 0.00 0.00 11917.62 2961.26 19612.25 00:30:08.654 [2024-10-17T14:58:22.344Z] =================================================================================================================== 00:30:08.654 [2024-10-17T14:58:22.344Z] Total : 10726.41 83.80 0.00 0.00 11917.62 2961.26 19612.25 00:30:08.654 [2024-10-17 16:58:22.216500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.216527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.224498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.224526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.232507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.232538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.240539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.240590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.248545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.248596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.256554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.654 [2024-10-17 16:58:22.256606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.264544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:30:08.654 [2024-10-17 16:58:22.264592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.654 [2024-10-17 16:58:22.272534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.272583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.280550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.280599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.288540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.288586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.296545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.296593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.304542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.304590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.312548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.312598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.320563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.320616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.328544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.328593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.336537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.336584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.655 [2024-10-17 16:58:22.344546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.655 [2024-10-17 16:58:22.344594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.352538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.352582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.360494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.360517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.368497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.368523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.376494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.376518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.384494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.384517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.392530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.392571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:08.913 [2024-10-17 16:58:22.400543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.400593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.408523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.408563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.416495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.416518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.424492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.424515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 [2024-10-17 16:58:22.432493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:08.913 [2024-10-17 16:58:22.432516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2510600) - No such process 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2510600 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:08.913 delay0 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.913 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:08.913 [2024-10-17 16:58:22.546367] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:17.024 Initializing NVMe Controllers 00:30:17.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.024 Initialization complete. Launching workers. 
00:30:17.024 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 20327 00:30:17.024 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20441, failed to submit 124 00:30:17.024 success 20366, unsuccessful 75, failed 0 00:30:17.024 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:17.024 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.025 rmmod nvme_tcp 00:30:17.025 rmmod nvme_fabrics 00:30:17.025 rmmod nvme_keyring 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2509277 ']' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2509277 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@950 -- # '[' -z 2509277 ']' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2509277 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2509277 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2509277' 00:30:17.025 killing process with pid 2509277 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2509277 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2509277 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 
00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.025 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:18.400 00:30:18.400 real 0m28.696s 00:30:18.400 user 0m40.550s 00:30:18.400 sys 0m10.185s 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 ************************************ 00:30:18.400 END TEST nvmf_zcopy 00:30:18.400 ************************************ 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:18.400 
************************************ 00:30:18.400 START TEST nvmf_nmic 00:30:18.400 ************************************ 00:30:18.400 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:18.658 * Looking for test storage... 00:30:18.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.658 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:18.658 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:30:18.658 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:18.658 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:18.658 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.659 16:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.659 16:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:18.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.659 --rc genhtml_branch_coverage=1 00:30:18.659 --rc genhtml_function_coverage=1 00:30:18.659 --rc genhtml_legend=1 00:30:18.659 --rc geninfo_all_blocks=1 00:30:18.659 --rc geninfo_unexecuted_blocks=1 00:30:18.659 00:30:18.659 ' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:18.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.659 --rc genhtml_branch_coverage=1 00:30:18.659 --rc genhtml_function_coverage=1 00:30:18.659 --rc genhtml_legend=1 00:30:18.659 --rc geninfo_all_blocks=1 00:30:18.659 --rc geninfo_unexecuted_blocks=1 00:30:18.659 00:30:18.659 ' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:18.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.659 --rc genhtml_branch_coverage=1 00:30:18.659 --rc genhtml_function_coverage=1 00:30:18.659 --rc genhtml_legend=1 00:30:18.659 --rc geninfo_all_blocks=1 00:30:18.659 --rc geninfo_unexecuted_blocks=1 00:30:18.659 00:30:18.659 ' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:18.659 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.659 --rc genhtml_branch_coverage=1 00:30:18.659 --rc genhtml_function_coverage=1 00:30:18.659 --rc genhtml_legend=1 00:30:18.659 --rc geninfo_all_blocks=1 00:30:18.659 --rc geninfo_unexecuted_blocks=1 00:30:18.659 00:30:18.659 ' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:18.659 16:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.659 16:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:18.659 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.660 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.660 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.660 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:18.660 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:18.660 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.660 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:20.562 16:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.562 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.563 16:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:20.563 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:20.563 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.563 16:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:20.563 Found net devices under 0000:09:00.0: cvl_0_0 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.563 16:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:20.563 Found net devices under 0000:09:00.1: cvl_0_1 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.563 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.822 16:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:30:20.822 00:30:20.822 --- 10.0.0.2 ping statistics --- 00:30:20.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.822 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:30:20.822 00:30:20.822 --- 10.0.0.1 ping statistics --- 00:30:20.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.822 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2513984 
00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2513984 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2513984 ']' 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.822 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:20.822 [2024-10-17 16:58:34.408307] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:20.822 [2024-10-17 16:58:34.409360] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:30:20.822 [2024-10-17 16:58:34.409418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.822 [2024-10-17 16:58:34.477897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.081 [2024-10-17 16:58:34.544470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.081 [2024-10-17 16:58:34.544526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.081 [2024-10-17 16:58:34.544542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.081 [2024-10-17 16:58:34.544557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.081 [2024-10-17 16:58:34.544568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.081 [2024-10-17 16:58:34.546225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.081 [2024-10-17 16:58:34.546256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.081 [2024-10-17 16:58:34.546316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.081 [2024-10-17 16:58:34.546320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.081 [2024-10-17 16:58:34.643408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:21.081 [2024-10-17 16:58:34.643661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:21.081 [2024-10-17 16:58:34.643986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:21.081 [2024-10-17 16:58:34.644643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:21.081 [2024-10-17 16:58:34.644900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.081 [2024-10-17 16:58:34.702731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.081 Malloc0 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:21.081 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.082 [2024-10-17 16:58:34.762845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:21.082 test case1: single bdev can't be used in multiple subsystems 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.082 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.340 [2024-10-17 16:58:34.786590] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:30:21.340 [2024-10-17 16:58:34.786619] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:21.340 [2024-10-17 16:58:34.786640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.340 request: 00:30:21.340 { 00:30:21.340 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:21.340 "namespace": { 00:30:21.340 "bdev_name": "Malloc0", 00:30:21.340 "no_auto_visible": false 00:30:21.340 }, 00:30:21.340 "method": "nvmf_subsystem_add_ns", 00:30:21.340 "req_id": 1 00:30:21.340 } 00:30:21.340 Got JSON-RPC error response 00:30:21.340 response: 00:30:21.340 { 00:30:21.340 "code": -32602, 00:30:21.340 "message": "Invalid parameters" 00:30:21.340 } 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:21.340 Adding namespace failed - expected result. 
00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:21.340 test case2: host connect to nvmf target in multiple paths 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:21.340 [2024-10-17 16:58:34.794677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.340 16:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:21.598 16:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:21.598 16:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:21.598 16:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:30:21.598 16:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:21.598 16:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:21.598 16:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:30:23.564 16:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:23.564 [global] 00:30:23.564 thread=1 00:30:23.564 invalidate=1 00:30:23.564 rw=write 00:30:23.564 time_based=1 00:30:23.564 runtime=1 00:30:23.564 ioengine=libaio 00:30:23.564 direct=1 00:30:23.564 bs=4096 00:30:23.564 iodepth=1 00:30:23.564 norandommap=0 00:30:23.564 numjobs=1 00:30:23.564 00:30:23.564 verify_dump=1 00:30:23.564 verify_backlog=512 00:30:23.564 verify_state_save=0 00:30:23.564 do_verify=1 00:30:23.564 verify=crc32c-intel 00:30:23.564 [job0] 00:30:23.564 filename=/dev/nvme0n1 00:30:23.820 Could not set queue depth (nvme0n1) 00:30:23.820 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:23.820 fio-3.35 00:30:23.820 Starting 1 thread 00:30:25.193 00:30:25.193 job0: (groupid=0, jobs=1): err= 0: pid=2514410: Thu Oct 17 
16:58:38 2024 00:30:25.193 read: IOPS=511, BW=2046KiB/s (2096kB/s)(2116KiB/1034msec) 00:30:25.193 slat (nsec): min=5251, max=33596, avg=12639.34, stdev=5455.51 00:30:25.193 clat (usec): min=211, max=42004, avg=1603.33, stdev=7344.61 00:30:25.193 lat (usec): min=217, max=42018, avg=1615.97, stdev=7344.65 00:30:25.193 clat percentiles (usec): 00:30:25.193 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 245], 00:30:25.193 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 258], 00:30:25.193 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 318], 00:30:25.193 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:25.193 | 99.99th=[42206] 00:30:25.193 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:30:25.193 slat (nsec): min=5458, max=55064, avg=10100.79, stdev=6762.93 00:30:25.193 clat (usec): min=139, max=263, avg=159.32, stdev=15.55 00:30:25.193 lat (usec): min=145, max=305, avg=169.43, stdev=20.06 00:30:25.193 clat percentiles (usec): 00:30:25.193 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:30:25.193 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:30:25.193 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 190], 00:30:25.193 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 258], 99.95th=[ 265], 00:30:25.193 | 99.99th=[ 265] 00:30:25.193 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:30:25.193 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:25.193 lat (usec) : 250=77.14%, 500=21.64% 00:30:25.193 lat (msec) : 4=0.13%, 50=1.09% 00:30:25.193 cpu : usr=1.55%, sys=1.74%, ctx=1553, majf=0, minf=1 00:30:25.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.193 issued rwts: 
total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:25.193 00:30:25.193 Run status group 0 (all jobs): 00:30:25.193 READ: bw=2046KiB/s (2096kB/s), 2046KiB/s-2046KiB/s (2096kB/s-2096kB/s), io=2116KiB (2167kB), run=1034-1034msec 00:30:25.193 WRITE: bw=3961KiB/s (4056kB/s), 3961KiB/s-3961KiB/s (4056kB/s-4056kB/s), io=4096KiB (4194kB), run=1034-1034msec 00:30:25.193 00:30:25.193 Disk stats (read/write): 00:30:25.193 nvme0n1: ios=575/1024, merge=0/0, ticks=746/150, in_queue=896, util=95.79% 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:25.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:25.193 16:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.193 rmmod nvme_tcp 00:30:25.193 rmmod nvme_fabrics 00:30:25.193 rmmod nvme_keyring 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2513984 ']' 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2513984 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2513984 ']' 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2513984 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513984 
00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513984' 00:30:25.193 killing process with pid 2513984 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2513984 00:30:25.193 16:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2513984 00:30:25.761 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:25.761 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.762 16:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.762 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.665 00:30:27.665 real 0m9.122s 00:30:27.665 user 0m17.144s 00:30:27.665 sys 0m3.237s 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:27.665 ************************************ 00:30:27.665 END TEST nvmf_nmic 00:30:27.665 ************************************ 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.665 ************************************ 00:30:27.665 START TEST nvmf_fio_target 00:30:27.665 ************************************ 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:27.665 * Looking for test storage... 
00:30:27.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.665 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:27.924 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:27.924 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.924 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.924 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.925 
16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.925 --rc genhtml_branch_coverage=1 00:30:27.925 --rc genhtml_function_coverage=1 00:30:27.925 --rc genhtml_legend=1 00:30:27.925 --rc geninfo_all_blocks=1 00:30:27.925 --rc geninfo_unexecuted_blocks=1 00:30:27.925 00:30:27.925 ' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.925 --rc genhtml_branch_coverage=1 00:30:27.925 --rc genhtml_function_coverage=1 00:30:27.925 --rc genhtml_legend=1 00:30:27.925 --rc geninfo_all_blocks=1 00:30:27.925 --rc geninfo_unexecuted_blocks=1 00:30:27.925 00:30:27.925 ' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.925 --rc genhtml_branch_coverage=1 00:30:27.925 --rc genhtml_function_coverage=1 00:30:27.925 --rc genhtml_legend=1 00:30:27.925 --rc geninfo_all_blocks=1 00:30:27.925 --rc geninfo_unexecuted_blocks=1 00:30:27.925 00:30:27.925 ' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.925 --rc genhtml_branch_coverage=1 00:30:27.925 --rc genhtml_function_coverage=1 00:30:27.925 --rc genhtml_legend=1 00:30:27.925 --rc geninfo_all_blocks=1 
00:30:27.925 --rc geninfo_unexecuted_blocks=1 00:30:27.925 00:30:27.925 ' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:27.925 
16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.925 16:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.925 
16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.925 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:27.926 16:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.926 16:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.826 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.826 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.827 16:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:29.827 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:29.827 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.827 
16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:29.827 Found net 
devices under 0000:09:00.0: cvl_0_0 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:29.827 Found net devices under 0000:09:00.1: cvl_0_1 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:29.827 16:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:29.827 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:30.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:30.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:30:30.086 00:30:30.086 --- 10.0.0.2 ping statistics --- 00:30:30.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.086 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:30.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:30:30.086 00:30:30.086 --- 10.0.0.1 ping statistics --- 00:30:30.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.086 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.086 16:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2516564 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2516564 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2516564 ']' 00:30:30.086 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.087 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:30.087 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.087 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:30.087 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.087 [2024-10-17 16:58:43.686736] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:30.087 [2024-10-17 16:58:43.687797] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:30:30.087 [2024-10-17 16:58:43.687842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.087 [2024-10-17 16:58:43.751992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.345 [2024-10-17 16:58:43.815941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.345 [2024-10-17 16:58:43.816009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.345 [2024-10-17 16:58:43.816028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.345 [2024-10-17 16:58:43.816041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.345 [2024-10-17 16:58:43.816052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.345 [2024-10-17 16:58:43.817674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.345 [2024-10-17 16:58:43.817743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.345 [2024-10-17 16:58:43.817788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.345 [2024-10-17 16:58:43.817791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.345 [2024-10-17 16:58:43.909415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:30.345 [2024-10-17 16:58:43.909671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:30.345 [2024-10-17 16:58:43.909952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:30.345 [2024-10-17 16:58:43.910516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:30.345 [2024-10-17 16:58:43.910775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.345 16:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:30.604 [2024-10-17 16:58:44.250479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.604 16:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:31.171 16:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:31.171 16:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:30:31.429 16:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:31.429 16:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:31.688 16:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:31.688 16:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:31.946 16:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:31.946 16:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:32.205 16:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:32.463 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:32.463 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:32.721 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:32.721 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:33.287 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:30:33.288 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:33.288 16:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:33.852 16:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:33.852 16:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:33.852 16:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:33.852 16:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:34.109 16:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.367 [2024-10-17 16:58:48.030632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.367 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:34.624 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:35.188 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:35.189 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:35.189 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:30:35.189 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:35.189 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:30:35.189 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:30:35.189 16:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:30:37.087 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:37.344 [global] 00:30:37.344 thread=1 00:30:37.344 invalidate=1 00:30:37.344 rw=write 00:30:37.344 time_based=1 00:30:37.344 runtime=1 00:30:37.344 ioengine=libaio 00:30:37.344 direct=1 00:30:37.344 bs=4096 00:30:37.344 iodepth=1 00:30:37.344 norandommap=0 00:30:37.344 numjobs=1 00:30:37.344 00:30:37.344 verify_dump=1 00:30:37.344 verify_backlog=512 00:30:37.344 verify_state_save=0 00:30:37.344 do_verify=1 00:30:37.344 verify=crc32c-intel 00:30:37.344 [job0] 00:30:37.344 filename=/dev/nvme0n1 00:30:37.344 [job1] 00:30:37.344 filename=/dev/nvme0n2 00:30:37.344 [job2] 00:30:37.344 filename=/dev/nvme0n3 00:30:37.344 [job3] 00:30:37.344 filename=/dev/nvme0n4 00:30:37.344 Could not set queue depth (nvme0n1) 00:30:37.344 Could not set queue depth (nvme0n2) 00:30:37.344 Could not set queue depth (nvme0n3) 00:30:37.344 Could not set queue depth (nvme0n4) 00:30:37.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.344 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.344 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.344 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.344 fio-3.35 00:30:37.344 Starting 4 threads 00:30:38.718 00:30:38.718 job0: (groupid=0, jobs=1): err= 0: pid=2517513: Thu Oct 17 16:58:52 2024 00:30:38.718 read: IOPS=71, BW=288KiB/s (295kB/s)(292KiB/1015msec) 00:30:38.718 slat (nsec): min=5947, max=33678, avg=12437.11, stdev=10070.13 00:30:38.718 clat (usec): min=232, max=41024, avg=12518.85, stdev=18801.62 00:30:38.718 lat (usec): min=238, 
max=41049, avg=12531.29, stdev=18810.09 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 245], 00:30:38.718 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 269], 00:30:38.718 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:38.718 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:38.718 | 99.99th=[41157] 00:30:38.718 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:30:38.718 slat (nsec): min=7311, max=41286, avg=9064.84, stdev=2876.28 00:30:38.718 clat (usec): min=153, max=670, avg=182.84, stdev=43.72 00:30:38.718 lat (usec): min=161, max=679, avg=191.91, stdev=44.41 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:30:38.718 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:30:38.718 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 227], 00:30:38.718 | 99.00th=[ 367], 99.50th=[ 603], 99.90th=[ 668], 99.95th=[ 668], 00:30:38.718 | 99.99th=[ 668] 00:30:38.718 bw ( KiB/s): min= 4096, max= 4096, per=29.23%, avg=4096.00, stdev= 0.00, samples=1 00:30:38.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:38.718 lat (usec) : 250=90.60%, 500=4.96%, 750=0.68% 00:30:38.718 lat (msec) : 50=3.76% 00:30:38.718 cpu : usr=0.49%, sys=0.59%, ctx=586, majf=0, minf=1 00:30:38.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.718 issued rwts: total=73,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:38.718 job1: (groupid=0, jobs=1): err= 0: pid=2517514: Thu Oct 17 16:58:52 2024 00:30:38.718 read: IOPS=54, BW=218KiB/s (223kB/s)(220KiB/1009msec) 
00:30:38.718 slat (nsec): min=7522, max=38056, avg=22942.56, stdev=8293.49 00:30:38.718 clat (usec): min=251, max=41230, avg=15803.72, stdev=19913.64 00:30:38.718 lat (usec): min=269, max=41248, avg=15826.66, stdev=19910.57 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:30:38.718 | 30.00th=[ 269], 40.00th=[ 306], 50.00th=[ 359], 60.00th=[ 388], 00:30:38.718 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:38.718 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:38.718 | 99.99th=[41157] 00:30:38.718 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:30:38.718 slat (nsec): min=7092, max=34044, avg=10363.01, stdev=2962.35 00:30:38.718 clat (usec): min=162, max=414, avg=255.84, stdev=39.90 00:30:38.718 lat (usec): min=171, max=423, avg=266.20, stdev=40.54 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 235], 00:30:38.718 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 269], 00:30:38.718 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:30:38.718 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 416], 99.95th=[ 416], 00:30:38.718 | 99.99th=[ 416] 00:30:38.718 bw ( KiB/s): min= 4096, max= 4096, per=29.23%, avg=4096.00, stdev= 0.00, samples=1 00:30:38.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:38.718 lat (usec) : 250=29.81%, 500=66.49% 00:30:38.718 lat (msec) : 50=3.70% 00:30:38.718 cpu : usr=0.60%, sys=0.40%, ctx=570, majf=0, minf=1 00:30:38.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.718 issued rwts: total=55,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.718 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:30:38.718 job2: (groupid=0, jobs=1): err= 0: pid=2517515: Thu Oct 17 16:58:52 2024 00:30:38.718 read: IOPS=1510, BW=6041KiB/s (6186kB/s)(6156KiB/1019msec) 00:30:38.718 slat (nsec): min=4414, max=48542, avg=13213.35, stdev=5798.89 00:30:38.718 clat (usec): min=223, max=40965, avg=376.06, stdev=1794.12 00:30:38.718 lat (usec): min=229, max=40999, avg=389.28, stdev=1794.79 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 258], 00:30:38.718 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:30:38.718 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 367], 00:30:38.718 | 99.00th=[ 437], 99.50th=[ 482], 99.90th=[41157], 99.95th=[41157], 00:30:38.718 | 99.99th=[41157] 00:30:38.718 write: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec); 0 zone resets 00:30:38.718 slat (nsec): min=6229, max=70107, avg=13188.25, stdev=6755.94 00:30:38.718 clat (usec): min=157, max=328, avg=185.06, stdev=17.84 00:30:38.718 lat (usec): min=165, max=398, avg=198.25, stdev=22.89 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:30:38.718 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:30:38.718 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 215], 00:30:38.718 | 99.00th=[ 225], 99.50th=[ 227], 99.90th=[ 265], 99.95th=[ 326], 00:30:38.718 | 99.99th=[ 330] 00:30:38.718 bw ( KiB/s): min= 8192, max= 8192, per=58.46%, avg=8192.00, stdev= 0.00, samples=2 00:30:38.718 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:30:38.718 lat (usec) : 250=63.59%, 500=36.30%, 750=0.03% 00:30:38.718 lat (msec) : 50=0.08% 00:30:38.718 cpu : usr=2.65%, sys=6.58%, ctx=3587, majf=0, minf=1 00:30:38.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.718 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.718 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:38.718 job3: (groupid=0, jobs=1): err= 0: pid=2517516: Thu Oct 17 16:58:52 2024 00:30:38.718 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:30:38.718 slat (nsec): min=8112, max=35677, avg=26362.05, stdev=10363.17 00:30:38.718 clat (usec): min=609, max=42473, avg=39923.97, stdev=8791.14 00:30:38.718 lat (usec): min=618, max=42487, avg=39950.34, stdev=8795.16 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 611], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:38.718 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:30:38.718 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:38.718 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:30:38.718 | 99.99th=[42730] 00:30:38.718 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:30:38.718 slat (nsec): min=6671, max=38770, avg=9997.47, stdev=3724.54 00:30:38.718 clat (usec): min=148, max=457, avg=267.99, stdev=48.23 00:30:38.718 lat (usec): min=156, max=467, avg=277.99, stdev=49.16 00:30:38.718 clat percentiles (usec): 00:30:38.718 | 1.00th=[ 153], 5.00th=[ 176], 10.00th=[ 231], 20.00th=[ 243], 00:30:38.718 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:30:38.719 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 383], 00:30:38.719 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 457], 99.95th=[ 457], 00:30:38.719 | 99.99th=[ 457] 00:30:38.719 bw ( KiB/s): min= 4096, max= 4096, per=29.23%, avg=4096.00, stdev= 0.00, samples=1 00:30:38.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:38.719 lat (usec) : 250=26.59%, 500=69.29%, 750=0.19% 00:30:38.719 lat (msec) : 50=3.93% 00:30:38.719 cpu : usr=0.20%, sys=0.68%, ctx=534, majf=0, minf=2 
00:30:38.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.719 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:38.719 00:30:38.719 Run status group 0 (all jobs): 00:30:38.719 READ: bw=6604KiB/s (6763kB/s), 86.0KiB/s-6041KiB/s (88.1kB/s-6186kB/s), io=6756KiB (6918kB), run=1009-1023msec 00:30:38.719 WRITE: bw=13.7MiB/s (14.3MB/s), 2002KiB/s-8039KiB/s (2050kB/s-8232kB/s), io=14.0MiB (14.7MB), run=1009-1023msec 00:30:38.719 00:30:38.719 Disk stats (read/write): 00:30:38.719 nvme0n1: ios=118/512, merge=0/0, ticks=730/88, in_queue=818, util=86.77% 00:30:38.719 nvme0n2: ios=80/512, merge=0/0, ticks=956/131, in_queue=1087, util=97.97% 00:30:38.719 nvme0n3: ios=1536/1809, merge=0/0, ticks=435/321, in_queue=756, util=89.02% 00:30:38.719 nvme0n4: ios=17/512, merge=0/0, ticks=672/135, in_queue=807, util=89.67% 00:30:38.719 16:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:38.719 [global] 00:30:38.719 thread=1 00:30:38.719 invalidate=1 00:30:38.719 rw=randwrite 00:30:38.719 time_based=1 00:30:38.719 runtime=1 00:30:38.719 ioengine=libaio 00:30:38.719 direct=1 00:30:38.719 bs=4096 00:30:38.719 iodepth=1 00:30:38.719 norandommap=0 00:30:38.719 numjobs=1 00:30:38.719 00:30:38.719 verify_dump=1 00:30:38.719 verify_backlog=512 00:30:38.719 verify_state_save=0 00:30:38.719 do_verify=1 00:30:38.719 verify=crc32c-intel 00:30:38.719 [job0] 00:30:38.719 filename=/dev/nvme0n1 00:30:38.719 [job1] 00:30:38.719 filename=/dev/nvme0n2 00:30:38.719 [job2] 00:30:38.719 filename=/dev/nvme0n3 00:30:38.719 [job3] 00:30:38.719 filename=/dev/nvme0n4 
00:30:38.719 Could not set queue depth (nvme0n1) 00:30:38.719 Could not set queue depth (nvme0n2) 00:30:38.719 Could not set queue depth (nvme0n3) 00:30:38.719 Could not set queue depth (nvme0n4) 00:30:38.976 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:38.977 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:38.977 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:38.977 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:38.977 fio-3.35 00:30:38.977 Starting 4 threads 00:30:40.377 00:30:40.377 job0: (groupid=0, jobs=1): err= 0: pid=2517858: Thu Oct 17 16:58:53 2024 00:30:40.377 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:40.377 slat (nsec): min=5661, max=35587, avg=6870.80, stdev=2259.42 00:30:40.377 clat (usec): min=202, max=1187, avg=246.76, stdev=52.41 00:30:40.377 lat (usec): min=208, max=1197, avg=253.63, stdev=52.80 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:30:40.377 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:30:40.377 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 285], 00:30:40.377 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 938], 99.95th=[ 1172], 00:30:40.377 | 99.99th=[ 1188] 00:30:40.377 write: IOPS=2545, BW=9.94MiB/s (10.4MB/s)(9.95MiB/1001msec); 0 zone resets 00:30:40.377 slat (nsec): min=7124, max=38536, avg=8931.51, stdev=2911.44 00:30:40.377 clat (usec): min=137, max=396, avg=175.67, stdev=28.97 00:30:40.377 lat (usec): min=144, max=405, avg=184.60, stdev=29.32 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:30:40.377 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 
00:30:40.377 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 229], 95.00th=[ 245], 00:30:40.377 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 388], 00:30:40.377 | 99.99th=[ 396] 00:30:40.377 bw ( KiB/s): min= 9496, max= 9496, per=42.28%, avg=9496.00, stdev= 0.00, samples=1 00:30:40.377 iops : min= 2374, max= 2374, avg=2374.00, stdev= 0.00, samples=1 00:30:40.377 lat (usec) : 250=84.94%, 500=14.56%, 750=0.44%, 1000=0.02% 00:30:40.377 lat (msec) : 2=0.04% 00:30:40.377 cpu : usr=1.90%, sys=5.80%, ctx=4597, majf=0, minf=1 00:30:40.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 issued rwts: total=2048,2548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.377 job1: (groupid=0, jobs=1): err= 0: pid=2517859: Thu Oct 17 16:58:53 2024 00:30:40.377 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:40.377 slat (nsec): min=5371, max=35823, avg=6522.44, stdev=2090.22 00:30:40.377 clat (usec): min=204, max=41100, avg=291.50, stdev=1290.41 00:30:40.377 lat (usec): min=210, max=41115, avg=298.02, stdev=1290.62 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:30:40.377 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:30:40.377 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 314], 00:30:40.377 | 99.00th=[ 396], 99.50th=[ 424], 99.90th=[ 9110], 99.95th=[41157], 00:30:40.377 | 99.99th=[41157] 00:30:40.377 write: IOPS=2152, BW=8611KiB/s (8818kB/s)(8620KiB/1001msec); 0 zone resets 00:30:40.377 slat (nsec): min=6907, max=41051, avg=8408.71, stdev=2813.48 00:30:40.377 clat (usec): min=135, max=419, avg=167.82, stdev=16.76 00:30:40.377 lat (usec): min=142, max=426, avg=176.23, stdev=17.45 
00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:30:40.377 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:30:40.377 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:30:40.377 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 260], 99.95th=[ 260], 00:30:40.377 | 99.99th=[ 420] 00:30:40.377 bw ( KiB/s): min=11128, max=11128, per=49.55%, avg=11128.00, stdev= 0.00, samples=1 00:30:40.377 iops : min= 2782, max= 2782, avg=2782.00, stdev= 0.00, samples=1 00:30:40.377 lat (usec) : 250=86.13%, 500=13.73%, 1000=0.02% 00:30:40.377 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02%, 50=0.05% 00:30:40.377 cpu : usr=3.00%, sys=3.80%, ctx=4203, majf=0, minf=1 00:30:40.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 issued rwts: total=2048,2155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.377 job2: (groupid=0, jobs=1): err= 0: pid=2517861: Thu Oct 17 16:58:53 2024 00:30:40.377 read: IOPS=367, BW=1471KiB/s (1506kB/s)(1472KiB/1001msec) 00:30:40.377 slat (nsec): min=5682, max=34859, avg=8046.55, stdev=4563.07 00:30:40.377 clat (usec): min=227, max=41047, avg=2401.12, stdev=9005.67 00:30:40.377 lat (usec): min=233, max=41063, avg=2409.17, stdev=9006.80 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 269], 00:30:40.377 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:30:40.377 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[40633], 00:30:40.377 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:40.377 | 99.99th=[41157] 00:30:40.377 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone 
resets 00:30:40.377 slat (nsec): min=7525, max=32921, avg=8662.24, stdev=2547.34 00:30:40.377 clat (usec): min=172, max=306, avg=208.69, stdev=19.38 00:30:40.377 lat (usec): min=180, max=314, avg=217.35, stdev=19.66 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:30:40.377 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:30:40.377 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 243], 00:30:40.377 | 99.00th=[ 262], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 306], 00:30:40.377 | 99.99th=[ 306] 00:30:40.377 bw ( KiB/s): min= 4096, max= 4096, per=18.24%, avg=4096.00, stdev= 0.00, samples=1 00:30:40.377 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:40.377 lat (usec) : 250=60.00%, 500=37.84% 00:30:40.377 lat (msec) : 50=2.16% 00:30:40.377 cpu : usr=0.60%, sys=0.90%, ctx=880, majf=0, minf=2 00:30:40.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 issued rwts: total=368,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.377 job3: (groupid=0, jobs=1): err= 0: pid=2517862: Thu Oct 17 16:58:53 2024 00:30:40.377 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:30:40.377 slat (nsec): min=6915, max=27885, avg=15631.50, stdev=5198.31 00:30:40.377 clat (usec): min=40797, max=41083, avg=40971.92, stdev=65.62 00:30:40.377 lat (usec): min=40803, max=41098, avg=40987.55, stdev=65.03 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:40.377 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:40.377 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:30:40.377 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:40.377 | 99.99th=[41157] 00:30:40.377 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:30:40.377 slat (nsec): min=7830, max=30339, avg=9162.57, stdev=2361.39 00:30:40.377 clat (usec): min=151, max=285, avg=217.27, stdev=26.06 00:30:40.377 lat (usec): min=158, max=294, avg=226.44, stdev=26.13 00:30:40.377 clat percentiles (usec): 00:30:40.377 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:30:40.377 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 221], 60.00th=[ 229], 00:30:40.377 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258], 00:30:40.377 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 285], 99.95th=[ 285], 00:30:40.377 | 99.99th=[ 285] 00:30:40.377 bw ( KiB/s): min= 4096, max= 4096, per=18.24%, avg=4096.00, stdev= 0.00, samples=1 00:30:40.377 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:40.377 lat (usec) : 250=86.70%, 500=9.18% 00:30:40.377 lat (msec) : 50=4.12% 00:30:40.377 cpu : usr=0.59%, sys=0.29%, ctx=535, majf=0, minf=1 00:30:40.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.377 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.377 00:30:40.377 Run status group 0 (all jobs): 00:30:40.377 READ: bw=17.2MiB/s (18.0MB/s), 86.3KiB/s-8184KiB/s (88.3kB/s-8380kB/s), io=17.5MiB (18.4MB), run=1001-1020msec 00:30:40.378 WRITE: bw=21.9MiB/s (23.0MB/s), 2008KiB/s-9.94MiB/s (2056kB/s-10.4MB/s), io=22.4MiB (23.5MB), run=1001-1020msec 00:30:40.378 00:30:40.378 Disk stats (read/write): 00:30:40.378 nvme0n1: ios=1861/2048, merge=0/0, ticks=1389/350, in_queue=1739, util=98.60% 00:30:40.378 nvme0n2: 
ios=1702/2048, merge=0/0, ticks=567/318, in_queue=885, util=93.71% 00:30:40.378 nvme0n3: ios=22/512, merge=0/0, ticks=864/101, in_queue=965, util=90.83% 00:30:40.378 nvme0n4: ios=45/512, merge=0/0, ticks=1642/108, in_queue=1750, util=98.22% 00:30:40.378 16:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:40.378 [global] 00:30:40.378 thread=1 00:30:40.378 invalidate=1 00:30:40.378 rw=write 00:30:40.378 time_based=1 00:30:40.378 runtime=1 00:30:40.378 ioengine=libaio 00:30:40.378 direct=1 00:30:40.378 bs=4096 00:30:40.378 iodepth=128 00:30:40.378 norandommap=0 00:30:40.378 numjobs=1 00:30:40.378 00:30:40.378 verify_dump=1 00:30:40.378 verify_backlog=512 00:30:40.378 verify_state_save=0 00:30:40.378 do_verify=1 00:30:40.378 verify=crc32c-intel 00:30:40.378 [job0] 00:30:40.378 filename=/dev/nvme0n1 00:30:40.378 [job1] 00:30:40.378 filename=/dev/nvme0n2 00:30:40.378 [job2] 00:30:40.378 filename=/dev/nvme0n3 00:30:40.378 [job3] 00:30:40.378 filename=/dev/nvme0n4 00:30:40.378 Could not set queue depth (nvme0n1) 00:30:40.378 Could not set queue depth (nvme0n2) 00:30:40.378 Could not set queue depth (nvme0n3) 00:30:40.378 Could not set queue depth (nvme0n4) 00:30:40.378 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:40.378 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:40.378 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:40.378 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:40.378 fio-3.35 00:30:40.378 Starting 4 threads 00:30:41.755 00:30:41.755 job0: (groupid=0, jobs=1): err= 0: pid=2518088: Thu Oct 17 16:58:55 2024 00:30:41.755 read: IOPS=4575, BW=17.9MiB/s 
(18.7MB/s)(18.0MiB/1007msec) 00:30:41.755 slat (usec): min=2, max=21456, avg=98.81, stdev=685.50 00:30:41.755 clat (usec): min=3415, max=48307, avg=12871.05, stdev=6231.19 00:30:41.755 lat (usec): min=3430, max=48433, avg=12969.86, stdev=6278.14 00:30:41.755 clat percentiles (usec): 00:30:41.755 | 1.00th=[ 4228], 5.00th=[ 7242], 10.00th=[ 8291], 20.00th=[ 9372], 00:30:41.755 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11600], 00:30:41.755 | 70.00th=[12387], 80.00th=[14746], 90.00th=[22152], 95.00th=[27132], 00:30:41.755 | 99.00th=[34341], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:41.755 | 99.99th=[48497] 00:30:41.755 write: IOPS=4971, BW=19.4MiB/s (20.4MB/s)(19.6MiB/1007msec); 0 zone resets 00:30:41.755 slat (usec): min=4, max=14244, avg=100.68, stdev=685.52 00:30:41.755 clat (usec): min=669, max=67094, avg=13658.18, stdev=9580.55 00:30:41.755 lat (usec): min=1561, max=67109, avg=13758.87, stdev=9643.52 00:30:41.755 clat percentiles (usec): 00:30:41.755 | 1.00th=[ 4113], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 8717], 00:30:41.755 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10552], 60.00th=[11338], 00:30:41.755 | 70.00th=[12256], 80.00th=[15008], 90.00th=[22676], 95.00th=[33162], 00:30:41.755 | 99.00th=[63177], 99.50th=[64750], 99.90th=[66847], 99.95th=[66847], 00:30:41.755 | 99.99th=[66847] 00:30:41.755 bw ( KiB/s): min=18176, max=20848, per=31.21%, avg=19512.00, stdev=1889.39, samples=2 00:30:41.755 iops : min= 4544, max= 5212, avg=4878.00, stdev=472.35, samples=2 00:30:41.755 lat (usec) : 750=0.01% 00:30:41.755 lat (msec) : 2=0.02%, 4=0.59%, 10=31.08%, 20=57.10%, 50=10.21% 00:30:41.755 lat (msec) : 100=0.98% 00:30:41.755 cpu : usr=6.46%, sys=8.95%, ctx=396, majf=0, minf=2 00:30:41.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:41.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:30:41.755 issued rwts: total=4608,5006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:41.755 job1: (groupid=0, jobs=1): err= 0: pid=2518089: Thu Oct 17 16:58:55 2024 00:30:41.755 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:30:41.755 slat (usec): min=2, max=50653, avg=176.13, stdev=1554.77 00:30:41.755 clat (msec): min=3, max=107, avg=24.00, stdev=17.07 00:30:41.755 lat (msec): min=3, max=107, avg=24.17, stdev=17.14 00:30:41.755 clat percentiles (msec): 00:30:41.755 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 14], 00:30:41.755 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:30:41.755 | 70.00th=[ 24], 80.00th=[ 32], 90.00th=[ 44], 95.00th=[ 61], 00:30:41.755 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 00:30:41.755 | 99.99th=[ 108] 00:30:41.755 write: IOPS=3324, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1007msec); 0 zone resets 00:30:41.755 slat (usec): min=2, max=13404, avg=132.22, stdev=775.22 00:30:41.755 clat (usec): min=3410, max=72903, avg=16077.32, stdev=9047.46 00:30:41.755 lat (usec): min=3655, max=75930, avg=16209.54, stdev=9134.47 00:30:41.755 clat percentiles (usec): 00:30:41.755 | 1.00th=[ 5276], 5.00th=[ 6652], 10.00th=[ 9503], 20.00th=[10945], 00:30:41.755 | 30.00th=[11731], 40.00th=[12780], 50.00th=[14222], 60.00th=[15270], 00:30:41.756 | 70.00th=[16450], 80.00th=[19268], 90.00th=[24249], 95.00th=[26608], 00:30:41.756 | 99.00th=[63701], 99.50th=[65274], 99.90th=[72877], 99.95th=[72877], 00:30:41.756 | 99.99th=[72877] 00:30:41.756 bw ( KiB/s): min=12312, max=13472, per=20.62%, avg=12892.00, stdev=820.24, samples=2 00:30:41.756 iops : min= 3078, max= 3368, avg=3223.00, stdev=205.06, samples=2 00:30:41.756 lat (msec) : 4=0.26%, 10=10.70%, 20=62.18%, 50=21.40%, 100=5.28% 00:30:41.756 lat (msec) : 250=0.17% 00:30:41.756 cpu : usr=2.29%, sys=3.78%, ctx=303, majf=0, minf=1 00:30:41.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.5%, >=64=99.0% 00:30:41.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:41.756 issued rwts: total=3072,3348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:41.756 job2: (groupid=0, jobs=1): err= 0: pid=2518090: Thu Oct 17 16:58:55 2024 00:30:41.756 read: IOPS=2760, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1004msec) 00:30:41.756 slat (usec): min=2, max=21494, avg=139.26, stdev=1018.28 00:30:41.756 clat (usec): min=904, max=49661, avg=18006.38, stdev=9302.68 00:30:41.756 lat (usec): min=2586, max=49671, avg=18145.64, stdev=9368.88 00:30:41.756 clat percentiles (usec): 00:30:41.756 | 1.00th=[ 2802], 5.00th=[ 6063], 10.00th=[ 9503], 20.00th=[12125], 00:30:41.756 | 30.00th=[12911], 40.00th=[13960], 50.00th=[15008], 60.00th=[16319], 00:30:41.756 | 70.00th=[19006], 80.00th=[23987], 90.00th=[33424], 95.00th=[40109], 00:30:41.756 | 99.00th=[45351], 99.50th=[45876], 99.90th=[49546], 99.95th=[49546], 00:30:41.756 | 99.99th=[49546] 00:30:41.756 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:30:41.756 slat (usec): min=3, max=17089, avg=152.73, stdev=989.08 00:30:41.756 clat (usec): min=1003, max=177771, avg=24904.56, stdev=26684.26 00:30:41.756 lat (usec): min=1022, max=177780, avg=25057.29, stdev=26805.98 00:30:41.756 clat percentiles (usec): 00:30:41.756 | 1.00th=[ 1827], 5.00th=[ 3163], 10.00th=[ 4359], 20.00th=[ 13435], 00:30:41.756 | 30.00th=[ 14615], 40.00th=[ 15533], 50.00th=[ 16450], 60.00th=[ 19006], 00:30:41.756 | 70.00th=[ 23987], 80.00th=[ 34866], 90.00th=[ 42206], 95.00th=[ 53740], 00:30:41.756 | 99.00th=[156238], 99.50th=[162530], 99.90th=[177210], 99.95th=[177210], 00:30:41.756 | 99.99th=[177210] 00:30:41.756 bw ( KiB/s): min=12288, max=12288, per=19.65%, avg=12288.00, stdev= 0.00, samples=2 00:30:41.756 iops : min= 3072, max= 3072, avg=3072.00, stdev= 
0.00, samples=2 00:30:41.756 lat (usec) : 1000=0.02% 00:30:41.756 lat (msec) : 2=1.27%, 4=4.57%, 10=6.37%, 20=53.46%, 50=31.61% 00:30:41.756 lat (msec) : 100=0.70%, 250=2.02% 00:30:41.756 cpu : usr=2.49%, sys=4.69%, ctx=299, majf=0, minf=1 00:30:41.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:30:41.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:41.756 issued rwts: total=2772,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:41.756 job3: (groupid=0, jobs=1): err= 0: pid=2518091: Thu Oct 17 16:58:55 2024 00:30:41.756 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:30:41.756 slat (usec): min=2, max=17360, avg=110.52, stdev=885.57 00:30:41.756 clat (usec): min=3377, max=37978, avg=14947.19, stdev=5643.74 00:30:41.756 lat (usec): min=3383, max=37995, avg=15057.71, stdev=5683.39 00:30:41.756 clat percentiles (usec): 00:30:41.756 | 1.00th=[ 4293], 5.00th=[ 8029], 10.00th=[ 9634], 20.00th=[11469], 00:30:41.756 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13435], 60.00th=[13698], 00:30:41.756 | 70.00th=[16188], 80.00th=[18744], 90.00th=[22414], 95.00th=[28181], 00:30:41.756 | 99.00th=[31851], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:30:41.756 | 99.99th=[38011] 00:30:41.756 write: IOPS=4284, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1007msec); 0 zone resets 00:30:41.756 slat (usec): min=3, max=23291, avg=109.98, stdev=936.19 00:30:41.756 clat (usec): min=341, max=45463, avg=15333.70, stdev=7561.54 00:30:41.756 lat (usec): min=382, max=55471, avg=15443.67, stdev=7629.64 00:30:41.756 clat percentiles (usec): 00:30:41.756 | 1.00th=[ 1762], 5.00th=[ 5145], 10.00th=[ 7242], 20.00th=[10028], 00:30:41.756 | 30.00th=[11863], 40.00th=[12911], 50.00th=[13304], 60.00th=[13829], 00:30:41.756 | 70.00th=[16450], 80.00th=[22152], 90.00th=[25822], 
95.00th=[30016], 00:30:41.756 | 99.00th=[37487], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:30:41.756 | 99.99th=[45351] 00:30:41.756 bw ( KiB/s): min=16096, max=17392, per=26.78%, avg=16744.00, stdev=916.41, samples=2 00:30:41.756 iops : min= 4024, max= 4348, avg=4186.00, stdev=229.10, samples=2 00:30:41.756 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.14% 00:30:41.756 lat (msec) : 2=0.39%, 4=0.89%, 10=14.46%, 20=64.68%, 50=19.36% 00:30:41.756 cpu : usr=3.78%, sys=7.26%, ctx=309, majf=0, minf=1 00:30:41.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:41.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:41.756 issued rwts: total=4096,4314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:41.756 00:30:41.756 Run status group 0 (all jobs): 00:30:41.756 READ: bw=56.4MiB/s (59.2MB/s), 10.8MiB/s-17.9MiB/s (11.3MB/s-18.7MB/s), io=56.8MiB (59.6MB), run=1004-1007msec 00:30:41.756 WRITE: bw=61.1MiB/s (64.0MB/s), 12.0MiB/s-19.4MiB/s (12.5MB/s-20.4MB/s), io=61.5MiB (64.5MB), run=1004-1007msec 00:30:41.756 00:30:41.756 Disk stats (read/write): 00:30:41.756 nvme0n1: ios=4144/4573, merge=0/0, ticks=41400/51725, in_queue=93125, util=97.90% 00:30:41.756 nvme0n2: ios=2565/2643, merge=0/0, ticks=27308/17149, in_queue=44457, util=86.50% 00:30:41.756 nvme0n3: ios=2094/2476, merge=0/0, ticks=36355/61424, in_queue=97779, util=99.48% 00:30:41.756 nvme0n4: ios=3338/3584, merge=0/0, ticks=43465/43868, in_queue=87333, util=89.61% 00:30:41.756 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:41.756 [global] 00:30:41.756 thread=1 00:30:41.756 invalidate=1 00:30:41.756 rw=randwrite 00:30:41.756 time_based=1 
00:30:41.756 runtime=1 00:30:41.756 ioengine=libaio 00:30:41.756 direct=1 00:30:41.756 bs=4096 00:30:41.756 iodepth=128 00:30:41.756 norandommap=0 00:30:41.756 numjobs=1 00:30:41.756 00:30:41.756 verify_dump=1 00:30:41.756 verify_backlog=512 00:30:41.756 verify_state_save=0 00:30:41.756 do_verify=1 00:30:41.756 verify=crc32c-intel 00:30:41.756 [job0] 00:30:41.756 filename=/dev/nvme0n1 00:30:41.756 [job1] 00:30:41.756 filename=/dev/nvme0n2 00:30:41.756 [job2] 00:30:41.756 filename=/dev/nvme0n3 00:30:41.756 [job3] 00:30:41.756 filename=/dev/nvme0n4 00:30:41.756 Could not set queue depth (nvme0n1) 00:30:41.756 Could not set queue depth (nvme0n2) 00:30:41.756 Could not set queue depth (nvme0n3) 00:30:41.756 Could not set queue depth (nvme0n4) 00:30:41.756 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:41.756 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:41.756 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:41.756 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:41.756 fio-3.35 00:30:41.756 Starting 4 threads 00:30:43.136 00:30:43.136 job0: (groupid=0, jobs=1): err= 0: pid=2518320: Thu Oct 17 16:58:56 2024 00:30:43.136 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:30:43.136 slat (usec): min=3, max=13966, avg=114.59, stdev=803.01 00:30:43.136 clat (usec): min=3449, max=55998, avg=13892.79, stdev=5998.44 00:30:43.136 lat (usec): min=3457, max=56005, avg=14007.38, stdev=6069.25 00:30:43.136 clat percentiles (usec): 00:30:43.136 | 1.00th=[ 5735], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[10552], 00:30:43.136 | 30.00th=[10945], 40.00th=[11338], 50.00th=[12387], 60.00th=[14484], 00:30:43.136 | 70.00th=[14877], 80.00th=[16188], 90.00th=[19006], 95.00th=[23725], 00:30:43.136 | 
99.00th=[42730], 99.50th=[48497], 99.90th=[55837], 99.95th=[55837], 00:30:43.136 | 99.99th=[55837] 00:30:43.136 write: IOPS=3891, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1004msec); 0 zone resets 00:30:43.136 slat (usec): min=4, max=11420, avg=128.92, stdev=731.74 00:30:43.136 clat (usec): min=478, max=84280, avg=19842.74, stdev=15789.16 00:30:43.136 lat (usec): min=497, max=84296, avg=19971.66, stdev=15875.13 00:30:43.136 clat percentiles (usec): 00:30:43.136 | 1.00th=[ 1074], 5.00th=[ 2704], 10.00th=[ 4424], 20.00th=[ 8029], 00:30:43.136 | 30.00th=[10552], 40.00th=[12649], 50.00th=[13960], 60.00th=[15008], 00:30:43.136 | 70.00th=[22414], 80.00th=[35914], 90.00th=[43779], 95.00th=[45351], 00:30:43.136 | 99.00th=[78119], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:30:43.136 | 99.99th=[84411] 00:30:43.136 bw ( KiB/s): min=12288, max=17944, per=24.44%, avg=15116.00, stdev=3999.40, samples=2 00:30:43.136 iops : min= 3072, max= 4486, avg=3779.00, stdev=999.85, samples=2 00:30:43.136 lat (usec) : 500=0.04%, 750=0.11%, 1000=0.17% 00:30:43.136 lat (msec) : 2=1.15%, 4=2.78%, 10=17.82%, 20=55.85%, 50=20.52% 00:30:43.136 lat (msec) : 100=1.56% 00:30:43.136 cpu : usr=3.39%, sys=4.99%, ctx=367, majf=0, minf=1 00:30:43.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:43.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:43.136 issued rwts: total=3584,3907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:43.136 job1: (groupid=0, jobs=1): err= 0: pid=2518321: Thu Oct 17 16:58:56 2024 00:30:43.136 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:30:43.136 slat (nsec): min=1936, max=12457k, avg=97576.81, stdev=664150.85 00:30:43.136 clat (usec): min=1565, max=55725, avg=12340.15, stdev=6005.00 00:30:43.136 lat (usec): min=1572, max=55807, avg=12437.72, 
stdev=6056.00 00:30:43.136 clat percentiles (usec): 00:30:43.136 | 1.00th=[ 2376], 5.00th=[ 4359], 10.00th=[ 6849], 20.00th=[ 8717], 00:30:43.136 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[12649], 00:30:43.136 | 70.00th=[13173], 80.00th=[14484], 90.00th=[17695], 95.00th=[19530], 00:30:43.136 | 99.00th=[41157], 99.50th=[50594], 99.90th=[55837], 99.95th=[55837], 00:30:43.136 | 99.99th=[55837] 00:30:43.136 write: IOPS=4227, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1012msec); 0 zone resets 00:30:43.136 slat (usec): min=2, max=11034, avg=126.27, stdev=672.98 00:30:43.136 clat (usec): min=321, max=67891, avg=18220.63, stdev=14086.92 00:30:43.136 lat (usec): min=361, max=67896, avg=18346.91, stdev=14179.56 00:30:43.136 clat percentiles (usec): 00:30:43.136 | 1.00th=[ 2180], 5.00th=[ 6587], 10.00th=[ 8455], 20.00th=[10552], 00:30:43.136 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:30:43.136 | 70.00th=[13435], 80.00th=[24511], 90.00th=[44303], 95.00th=[48497], 00:30:43.136 | 99.00th=[55837], 99.50th=[58459], 99.90th=[64750], 99.95th=[64750], 00:30:43.136 | 99.99th=[67634] 00:30:43.136 bw ( KiB/s): min=13728, max=19480, per=26.84%, avg=16604.00, stdev=4067.28, samples=2 00:30:43.136 iops : min= 3432, max= 4870, avg=4151.00, stdev=1016.82, samples=2 00:30:43.136 lat (usec) : 500=0.02%, 1000=0.14% 00:30:43.136 lat (msec) : 2=0.31%, 4=2.19%, 10=18.94%, 20=64.40%, 50=11.93% 00:30:43.136 lat (msec) : 100=2.07% 00:30:43.136 cpu : usr=3.76%, sys=7.91%, ctx=458, majf=0, minf=1 00:30:43.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:43.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:43.136 issued rwts: total=4096,4278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:43.136 job2: (groupid=0, jobs=1): err= 0: pid=2518322: Thu Oct 
17 16:58:56 2024 00:30:43.136 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:30:43.136 slat (usec): min=2, max=14233, avg=132.55, stdev=745.94 00:30:43.136 clat (usec): min=7933, max=56983, avg=17579.25, stdev=6132.14 00:30:43.136 lat (usec): min=7944, max=56995, avg=17711.80, stdev=6185.30 00:30:43.136 clat percentiles (usec): 00:30:43.136 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[13173], 20.00th=[14877], 00:30:43.136 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:30:43.136 | 70.00th=[17171], 80.00th=[17695], 90.00th=[21627], 95.00th=[25035], 00:30:43.136 | 99.00th=[51119], 99.50th=[53740], 99.90th=[54264], 99.95th=[56886], 00:30:43.136 | 99.99th=[56886] 00:30:43.136 write: IOPS=3187, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1007msec); 0 zone resets 00:30:43.136 slat (usec): min=3, max=23484, avg=175.48, stdev=1105.54 00:30:43.136 clat (usec): min=6342, max=68239, avg=22560.89, stdev=12134.78 00:30:43.136 lat (usec): min=7752, max=68283, avg=22736.37, stdev=12234.20 00:30:43.136 clat percentiles (usec): 00:30:43.136 | 1.00th=[10814], 5.00th=[11338], 10.00th=[12649], 20.00th=[13829], 00:30:43.137 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16450], 60.00th=[17957], 00:30:43.137 | 70.00th=[22676], 80.00th=[33162], 90.00th=[43254], 95.00th=[48497], 00:30:43.137 | 99.00th=[55837], 99.50th=[55837], 99.90th=[57934], 99.95th=[60031], 00:30:43.137 | 99.99th=[68682] 00:30:43.137 bw ( KiB/s): min=11470, max=13216, per=19.95%, avg=12343.00, stdev=1234.61, samples=2 00:30:43.137 iops : min= 2867, max= 3304, avg=3085.50, stdev=309.01, samples=2 00:30:43.137 lat (msec) : 10=1.32%, 20=73.50%, 50=22.51%, 100=2.67% 00:30:43.137 cpu : usr=2.98%, sys=6.26%, ctx=282, majf=0, minf=1 00:30:43.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:43.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:43.137 
issued rwts: total=3072,3210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:43.137 job3: (groupid=0, jobs=1): err= 0: pid=2518323: Thu Oct 17 16:58:56 2024 00:30:43.137 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:30:43.137 slat (usec): min=3, max=18569, avg=115.99, stdev=737.54 00:30:43.137 clat (usec): min=9003, max=51983, avg=15710.41, stdev=6603.71 00:30:43.137 lat (usec): min=9016, max=52002, avg=15826.40, stdev=6649.92 00:30:43.137 clat percentiles (usec): 00:30:43.137 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11338], 20.00th=[12256], 00:30:43.137 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13566], 60.00th=[14353], 00:30:43.137 | 70.00th=[15401], 80.00th=[16188], 90.00th=[23987], 95.00th=[32637], 00:30:43.137 | 99.00th=[40633], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:30:43.137 | 99.99th=[52167] 00:30:43.137 write: IOPS=4229, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1006msec); 0 zone resets 00:30:43.137 slat (usec): min=5, max=21593, avg=110.56, stdev=800.36 00:30:43.137 clat (usec): min=5710, max=60307, avg=14536.31, stdev=6021.61 00:30:43.137 lat (usec): min=6268, max=60358, avg=14646.87, stdev=6091.31 00:30:43.137 clat percentiles (usec): 00:30:43.137 | 1.00th=[ 8291], 5.00th=[10552], 10.00th=[11600], 20.00th=[11994], 00:30:43.137 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[13304], 00:30:43.137 | 70.00th=[13960], 80.00th=[14353], 90.00th=[16909], 95.00th=[28181], 00:30:43.137 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:30:43.137 | 99.99th=[60556] 00:30:43.137 bw ( KiB/s): min=16384, max=16640, per=26.69%, avg=16512.00, stdev=181.02, samples=2 00:30:43.137 iops : min= 4096, max= 4160, avg=4128.00, stdev=45.25, samples=2 00:30:43.137 lat (msec) : 10=2.24%, 20=87.27%, 50=10.45%, 100=0.04% 00:30:43.137 cpu : usr=6.47%, sys=11.64%, ctx=298, majf=0, minf=1 00:30:43.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.2% 00:30:43.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:43.137 issued rwts: total=4096,4255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:43.137 00:30:43.137 Run status group 0 (all jobs): 00:30:43.137 READ: bw=57.3MiB/s (60.1MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=58.0MiB (60.8MB), run=1004-1012msec 00:30:43.137 WRITE: bw=60.4MiB/s (63.3MB/s), 12.5MiB/s-16.5MiB/s (13.1MB/s-17.3MB/s), io=61.1MiB (64.1MB), run=1004-1012msec 00:30:43.137 00:30:43.137 Disk stats (read/write): 00:30:43.137 nvme0n1: ios=2680/3072, merge=0/0, ticks=39329/67280, in_queue=106609, util=98.90% 00:30:43.137 nvme0n2: ios=3599/3855, merge=0/0, ticks=30178/49229, in_queue=79407, util=87.11% 00:30:43.137 nvme0n3: ios=2586/2911, merge=0/0, ticks=13742/20275, in_queue=34017, util=95.42% 00:30:43.137 nvme0n4: ios=3388/3584, merge=0/0, ticks=26508/24239, in_queue=50747, util=98.32% 00:30:43.137 16:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:43.137 16:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2518455 00:30:43.137 16:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:43.137 16:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:43.137 [global] 00:30:43.137 thread=1 00:30:43.137 invalidate=1 00:30:43.137 rw=read 00:30:43.137 time_based=1 00:30:43.137 runtime=10 00:30:43.137 ioengine=libaio 00:30:43.137 direct=1 00:30:43.137 bs=4096 00:30:43.137 iodepth=1 00:30:43.137 norandommap=1 00:30:43.137 numjobs=1 00:30:43.137 00:30:43.137 [job0] 00:30:43.137 filename=/dev/nvme0n1 00:30:43.137 
[job1] 00:30:43.137 filename=/dev/nvme0n2 00:30:43.137 [job2] 00:30:43.137 filename=/dev/nvme0n3 00:30:43.137 [job3] 00:30:43.137 filename=/dev/nvme0n4 00:30:43.137 Could not set queue depth (nvme0n1) 00:30:43.137 Could not set queue depth (nvme0n2) 00:30:43.137 Could not set queue depth (nvme0n3) 00:30:43.137 Could not set queue depth (nvme0n4) 00:30:43.396 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:43.396 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:43.396 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:43.396 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:43.396 fio-3.35 00:30:43.396 Starting 4 threads 00:30:45.926 16:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:46.491 16:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:46.491 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=17960960, buflen=4096 00:30:46.491 fio: pid=2518633, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:46.750 16:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:46.750 16:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:46.750 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43278336, buflen=4096 00:30:46.750 fio: pid=2518622, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:47.008 16:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:47.008 16:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:47.008 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=442368, buflen=4096 00:30:47.008 fio: pid=2518576, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:47.267 16:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:47.267 16:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:47.267 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8941568, buflen=4096 00:30:47.267 fio: pid=2518594, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:47.267 00:30:47.267 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2518576: Thu Oct 17 16:59:00 2024 00:30:47.267 read: IOPS=30, BW=121KiB/s (124kB/s)(432KiB/3572msec) 00:30:47.267 slat (usec): min=10, max=26872, avg=366.05, stdev=2766.13 00:30:47.267 clat (usec): min=349, max=63765, avg=32478.55, stdev=17428.34 00:30:47.267 lat (usec): min=376, max=68941, avg=32847.85, stdev=17832.44 00:30:47.267 clat percentiles (usec): 00:30:47.267 | 1.00th=[ 355], 5.00th=[ 371], 10.00th=[ 404], 20.00th=[ 529], 00:30:47.267 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:47.267 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:47.267 | 
99.00th=[58459], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:30:47.267 | 99.99th=[63701] 00:30:47.267 bw ( KiB/s): min= 96, max= 192, per=0.72%, avg=128.00, stdev=39.52, samples=6 00:30:47.267 iops : min= 24, max= 48, avg=32.00, stdev= 9.88, samples=6 00:30:47.267 lat (usec) : 500=18.35%, 750=3.67% 00:30:47.267 lat (msec) : 50=75.23%, 100=1.83% 00:30:47.267 cpu : usr=0.06%, sys=0.06%, ctx=111, majf=0, minf=1 00:30:47.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:47.267 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2518594: Thu Oct 17 16:59:00 2024 00:30:47.267 read: IOPS=565, BW=2263KiB/s (2317kB/s)(8732KiB/3859msec) 00:30:47.267 slat (usec): min=5, max=13404, avg=27.33, stdev=442.32 00:30:47.267 clat (usec): min=211, max=41242, avg=1726.74, stdev=7451.42 00:30:47.267 lat (usec): min=218, max=54471, avg=1754.08, stdev=7501.66 00:30:47.267 clat percentiles (usec): 00:30:47.267 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 269], 00:30:47.267 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 322], 00:30:47.267 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 420], 00:30:47.267 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:47.267 | 99.99th=[41157] 00:30:47.267 bw ( KiB/s): min= 96, max= 8504, per=13.89%, avg=2483.86, stdev=4060.87, samples=7 00:30:47.267 iops : min= 24, max= 2126, avg=620.86, stdev=1015.29, samples=7 00:30:47.267 lat (usec) : 250=11.03%, 500=85.21%, 750=0.09% 00:30:47.267 lat (msec) : 2=0.14%, 50=3.48% 00:30:47.267 cpu : usr=0.52%, sys=0.80%, ctx=2188, majf=0, minf=1 
00:30:47.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:47.267 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2518622: Thu Oct 17 16:59:00 2024 00:30:47.267 read: IOPS=3236, BW=12.6MiB/s (13.3MB/s)(41.3MiB/3265msec) 00:30:47.267 slat (usec): min=4, max=6883, avg=11.14, stdev=67.22 00:30:47.267 clat (usec): min=202, max=63044, avg=293.30, stdev=1176.47 00:30:47.267 lat (usec): min=207, max=63058, avg=304.44, stdev=1201.08 00:30:47.267 clat percentiles (usec): 00:30:47.267 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:30:47.267 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:30:47.267 | 70.00th=[ 255], 80.00th=[ 297], 90.00th=[ 371], 95.00th=[ 424], 00:30:47.267 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 2343], 99.95th=[41157], 00:30:47.267 | 99.99th=[46400] 00:30:47.267 bw ( KiB/s): min= 8648, max=17104, per=78.78%, avg=14080.00, stdev=3297.26, samples=6 00:30:47.267 iops : min= 2162, max= 4276, avg=3520.00, stdev=824.31, samples=6 00:30:47.267 lat (usec) : 250=67.28%, 500=31.87%, 750=0.74% 00:30:47.267 lat (msec) : 4=0.01%, 10=0.03%, 50=0.06%, 100=0.01% 00:30:47.267 cpu : usr=1.32%, sys=4.26%, ctx=10568, majf=0, minf=2 00:30:47.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 issued rwts: total=10567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.267 latency : target=0, window=0, percentile=100.00%, depth=1 
00:30:47.267 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2518633: Thu Oct 17 16:59:00 2024 00:30:47.267 read: IOPS=1492, BW=5970KiB/s (6113kB/s)(17.1MiB/2938msec) 00:30:47.267 slat (nsec): min=4436, max=59626, avg=12159.04, stdev=5961.38 00:30:47.267 clat (usec): min=216, max=41090, avg=648.46, stdev=3784.93 00:30:47.267 lat (usec): min=223, max=41103, avg=660.62, stdev=3785.79 00:30:47.267 clat percentiles (usec): 00:30:47.267 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:30:47.267 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:30:47.267 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 383], 00:30:47.267 | 99.00th=[ 510], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:47.267 | 99.99th=[41157] 00:30:47.267 bw ( KiB/s): min= 96, max=13456, per=28.69%, avg=5128.00, stdev=5962.35, samples=5 00:30:47.267 iops : min= 24, max= 3364, avg=1282.00, stdev=1490.59, samples=5 00:30:47.267 lat (usec) : 250=7.25%, 500=91.66%, 750=0.16% 00:30:47.267 lat (msec) : 2=0.02%, 50=0.89% 00:30:47.267 cpu : usr=0.92%, sys=2.01%, ctx=4386, majf=0, minf=1 00:30:47.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.267 issued rwts: total=4386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:47.267 00:30:47.267 Run status group 0 (all jobs): 00:30:47.267 READ: bw=17.5MiB/s (18.3MB/s), 121KiB/s-12.6MiB/s (124kB/s-13.3MB/s), io=67.4MiB (70.6MB), run=2938-3859msec 00:30:47.267 00:30:47.267 Disk stats (read/write): 00:30:47.267 nvme0n1: ios=103/0, merge=0/0, ticks=3300/0, in_queue=3300, util=95.11% 00:30:47.267 nvme0n2: ios=2184/0, merge=0/0, ticks=3763/0, in_queue=3763, util=95.73% 00:30:47.267 nvme0n3: 
ios=10563/0, merge=0/0, ticks=2875/0, in_queue=2875, util=96.57% 00:30:47.267 nvme0n4: ios=4199/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.74% 00:30:47.525 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:47.525 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:47.784 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:47.784 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:48.042 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:48.042 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:48.300 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:48.300 16:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:48.558 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:48.558 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2518455 00:30:48.558 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 
00:30:48.558 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:48.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:48.817 nvmf hotplug test: fio failed as expected 00:30:48.817 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm 
-f ./local-job1-1-verify.state 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.077 rmmod nvme_tcp 00:30:49.077 rmmod nvme_fabrics 00:30:49.077 rmmod nvme_keyring 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2516564 ']' 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2516564 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2516564 ']' 00:30:49.077 16:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2516564 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2516564 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2516564' 00:30:49.077 killing process with pid 2516564 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2516564 00:30:49.077 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2516564 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.336 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.871 00:30:51.871 real 0m23.773s 00:30:51.871 user 1m7.261s 00:30:51.871 sys 0m10.085s 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.871 ************************************ 00:30:51.871 END TEST nvmf_fio_target 00:30:51.871 ************************************ 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:30:51.871 ************************************ 00:30:51.871 START TEST nvmf_bdevio 00:30:51.871 ************************************ 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:51.871 * Looking for test storage... 00:30:51.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.871 --rc genhtml_branch_coverage=1 00:30:51.871 --rc genhtml_function_coverage=1 00:30:51.871 --rc genhtml_legend=1 00:30:51.871 --rc geninfo_all_blocks=1 00:30:51.871 --rc geninfo_unexecuted_blocks=1 00:30:51.871 00:30:51.871 ' 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.871 --rc genhtml_branch_coverage=1 00:30:51.871 --rc genhtml_function_coverage=1 00:30:51.871 --rc genhtml_legend=1 00:30:51.871 --rc geninfo_all_blocks=1 00:30:51.871 --rc geninfo_unexecuted_blocks=1 00:30:51.871 00:30:51.871 ' 00:30:51.871 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.871 --rc genhtml_branch_coverage=1 00:30:51.871 --rc genhtml_function_coverage=1 00:30:51.871 --rc genhtml_legend=1 00:30:51.871 --rc geninfo_all_blocks=1 00:30:51.872 --rc geninfo_unexecuted_blocks=1 00:30:51.872 00:30:51.872 ' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:51.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.872 --rc genhtml_branch_coverage=1 00:30:51.872 --rc genhtml_function_coverage=1 00:30:51.872 --rc genhtml_legend=1 00:30:51.872 --rc geninfo_all_blocks=1 00:30:51.872 --rc geninfo_unexecuted_blocks=1 00:30:51.872 00:30:51.872 ' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.872 16:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.872 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:53.881 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.882 16:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.882 16:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:53.882 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:53.882 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:53.882 Found net devices under 0000:09:00.0: cvl_0_0 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:53.882 Found net devices under 0000:09:00.1: cvl_0_1 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.882 
16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:30:53.882 00:30:53.882 --- 10.0.0.2 ping statistics --- 00:30:53.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.882 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:53.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:30:53.882 00:30:53.882 --- 10.0.0.1 ping statistics --- 00:30:53.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.882 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:53.882 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=2521303 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2521303 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2521303 ']' 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.883 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.140 [2024-10-17 16:59:07.578957] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.140 [2024-10-17 16:59:07.580067] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:30:54.140 [2024-10-17 16:59:07.580121] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.140 [2024-10-17 16:59:07.647261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:54.140 [2024-10-17 16:59:07.713026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.140 [2024-10-17 16:59:07.713086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.140 [2024-10-17 16:59:07.713112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.140 [2024-10-17 16:59:07.713133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.140 [2024-10-17 16:59:07.713151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.140 [2024-10-17 16:59:07.714938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:54.140 [2024-10-17 16:59:07.715016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:54.140 [2024-10-17 16:59:07.715072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:54.140 [2024-10-17 16:59:07.715076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.140 [2024-10-17 16:59:07.805557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:54.140 [2024-10-17 16:59:07.805809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:54.140 [2024-10-17 16:59:07.806105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:54.140 [2024-10-17 16:59:07.806690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.140 [2024-10-17 16:59:07.806977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 [2024-10-17 16:59:07.887881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 Malloc0 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 [2024-10-17 16:59:07.956045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:54.398 { 00:30:54.398 "params": { 00:30:54.398 "name": "Nvme$subsystem", 00:30:54.398 "trtype": "$TEST_TRANSPORT", 00:30:54.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.398 "adrfam": "ipv4", 00:30:54.398 "trsvcid": "$NVMF_PORT", 00:30:54.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.398 "hdgst": ${hdgst:-false}, 00:30:54.398 "ddgst": ${ddgst:-false} 00:30:54.398 }, 00:30:54.398 "method": "bdev_nvme_attach_controller" 00:30:54.398 } 00:30:54.398 EOF 00:30:54.398 )") 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:30:54.398 16:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:54.398 "params": { 00:30:54.398 "name": "Nvme1", 00:30:54.398 "trtype": "tcp", 00:30:54.398 "traddr": "10.0.0.2", 00:30:54.398 "adrfam": "ipv4", 00:30:54.398 "trsvcid": "4420", 00:30:54.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.398 "hdgst": false, 00:30:54.398 "ddgst": false 00:30:54.398 }, 00:30:54.398 "method": "bdev_nvme_attach_controller" 00:30:54.398 }' 00:30:54.398 [2024-10-17 16:59:08.002546] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:30:54.398 [2024-10-17 16:59:08.002624] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521327 ] 00:30:54.399 [2024-10-17 16:59:08.062570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:54.658 [2024-10-17 16:59:08.124775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.658 [2024-10-17 16:59:08.124824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.658 [2024-10-17 16:59:08.124829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.658 I/O targets: 00:30:54.658 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:54.658 00:30:54.658 00:30:54.658 CUnit - A unit testing framework for C - Version 2.1-3 00:30:54.658 http://cunit.sourceforge.net/ 00:30:54.658 00:30:54.658 00:30:54.658 Suite: bdevio tests on: Nvme1n1 00:30:54.918 Test: blockdev write read block ...passed 00:30:54.918 Test: blockdev write zeroes read block ...passed 00:30:54.918 Test: blockdev write zeroes read no split ...passed 00:30:54.918 Test: blockdev 
write zeroes read split ...passed 00:30:54.918 Test: blockdev write zeroes read split partial ...passed 00:30:54.918 Test: blockdev reset ...[2024-10-17 16:59:08.525983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:54.918 [2024-10-17 16:59:08.526100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217d700 (9): Bad file descriptor 00:30:54.918 [2024-10-17 16:59:08.571274] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:54.918 passed 00:30:54.918 Test: blockdev write read 8 blocks ...passed 00:30:55.179 Test: blockdev write read size > 128k ...passed 00:30:55.179 Test: blockdev write read invalid size ...passed 00:30:55.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:55.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:55.179 Test: blockdev write read max offset ...passed 00:30:55.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:55.179 Test: blockdev writev readv 8 blocks ...passed 00:30:55.179 Test: blockdev writev readv 30 x 1block ...passed 00:30:55.179 Test: blockdev writev readv block ...passed 00:30:55.179 Test: blockdev writev readv size > 128k ...passed 00:30:55.179 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:55.179 Test: blockdev comparev and writev ...[2024-10-17 16:59:08.827167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.827202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.827227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.827245] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.827636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.827660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.827681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.827697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.828069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.828093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.828115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.828130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.828506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.828531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:55.179 [2024-10-17 16:59:08.828554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:30:55.179 [2024-10-17 16:59:08.828569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:55.438 passed 00:30:55.438 Test: blockdev nvme passthru rw ...passed 00:30:55.438 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:59:08.912251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:55.438 [2024-10-17 16:59:08.912285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:55.438 [2024-10-17 16:59:08.912435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:55.438 [2024-10-17 16:59:08.912459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:55.438 [2024-10-17 16:59:08.912605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:55.438 [2024-10-17 16:59:08.912628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:55.438 [2024-10-17 16:59:08.912771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:55.438 [2024-10-17 16:59:08.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:55.438 passed 00:30:55.438 Test: blockdev nvme admin passthru ...passed 00:30:55.438 Test: blockdev copy ...passed 00:30:55.438 00:30:55.438 Run Summary: Type Total Ran Passed Failed Inactive 00:30:55.438 suites 1 1 n/a 0 0 00:30:55.438 tests 23 23 23 0 0 00:30:55.438 asserts 152 152 152 0 n/a 00:30:55.438 00:30:55.438 Elapsed time = 1.263 seconds 00:30:55.697 16:59:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.697 rmmod nvme_tcp 00:30:55.697 rmmod nvme_fabrics 00:30:55.697 rmmod nvme_keyring 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 2521303 ']' 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2521303 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2521303 ']' 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2521303 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521303 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521303' 00:30:55.697 killing process with pid 2521303 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2521303 00:30:55.697 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2521303 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.956 16:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.862 16:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.862 00:30:57.862 real 0m6.475s 00:30:57.862 user 0m8.624s 00:30:57.862 sys 0m2.534s 00:30:57.862 16:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:58.120 16:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:58.120 ************************************ 00:30:58.120 END TEST nvmf_bdevio 00:30:58.120 ************************************ 00:30:58.120 16:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:58.120 00:30:58.120 real 3m53.988s 00:30:58.120 user 8m51.039s 00:30:58.120 sys 1m24.166s 00:30:58.120 16:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:30:58.120 16:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:58.120 ************************************ 00:30:58.120 END TEST nvmf_target_core_interrupt_mode 00:30:58.120 ************************************ 00:30:58.120 16:59:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:58.120 16:59:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:58.120 16:59:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:58.120 16:59:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.120 ************************************ 00:30:58.120 START TEST nvmf_interrupt 00:30:58.120 ************************************ 00:30:58.120 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:58.120 * Looking for test storage... 
00:30:58.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:58.120 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:58.120 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:30:58.120 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:58.120 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:58.120 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.121 --rc genhtml_branch_coverage=1 00:30:58.121 --rc genhtml_function_coverage=1 00:30:58.121 --rc genhtml_legend=1 00:30:58.121 --rc geninfo_all_blocks=1 00:30:58.121 --rc geninfo_unexecuted_blocks=1 00:30:58.121 00:30:58.121 ' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.121 --rc genhtml_branch_coverage=1 00:30:58.121 --rc 
genhtml_function_coverage=1 00:30:58.121 --rc genhtml_legend=1 00:30:58.121 --rc geninfo_all_blocks=1 00:30:58.121 --rc geninfo_unexecuted_blocks=1 00:30:58.121 00:30:58.121 ' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.121 --rc genhtml_branch_coverage=1 00:30:58.121 --rc genhtml_function_coverage=1 00:30:58.121 --rc genhtml_legend=1 00:30:58.121 --rc geninfo_all_blocks=1 00:30:58.121 --rc geninfo_unexecuted_blocks=1 00:30:58.121 00:30:58.121 ' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.121 --rc genhtml_branch_coverage=1 00:30:58.121 --rc genhtml_function_coverage=1 00:30:58.121 --rc genhtml_legend=1 00:30:58.121 --rc geninfo_all_blocks=1 00:30:58.121 --rc geninfo_unexecuted_blocks=1 00:30:58.121 00:30:58.121 ' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.121 
16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.121 
16:59:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.121 16:59:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:58.121 
16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.121 16:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.023 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.282 16:59:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:00.282 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:00.282 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:00.282 16:59:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:00.282 Found net devices under 0000:09:00.0: cvl_0_0 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:00.282 Found net devices under 0000:09:00.1: cvl_0_1 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.282 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.283 16:59:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:31:00.283 00:31:00.283 --- 10.0.0.2 ping statistics --- 00:31:00.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.283 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:00.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:31:00.283 00:31:00.283 --- 10.0.0.1 ping statistics --- 00:31:00.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.283 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:00.283 16:59:13 
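The `ipts` helper seen in the trace above wraps `iptables`, appending an `SPDK_NVMF:` comment that records the original rule arguments so teardown can later find and delete exactly the rules this test added. A minimal Python sketch of that argv expansion (the `build_ipts_argv` name is mine, not from nvmf/common.sh):

```python
def build_ipts_argv(*rule_args: str) -> list[str]:
    """Mimic nvmf/common.sh's ipts() wrapper: run iptables with the given
    rule, tagging it with an SPDK_NVMF comment holding the original args."""
    comment = "SPDK_NVMF:" + " ".join(rule_args)
    return ["iptables", *rule_args, "-m", "comment", "--comment", comment]

# Same rule the log shows being installed for the initiator interface:
argv = build_ipts_argv("-I", "INPUT", "1", "-i", "cvl_0_1",
                       "-p", "tcp", "--dport", "4420", "-j", "ACCEPT")
```

The comment string reproduced in the trace (`SPDK_NVMF:-I INPUT 1 -i cvl_0_1 ...`) is exactly this concatenation, with no separator after the colon.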
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2523415 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2523415 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2523415 ']' 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:00.283 16:59:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.283 [2024-10-17 16:59:13.923859] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:00.283 [2024-10-17 16:59:13.925033] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:31:00.283 [2024-10-17 16:59:13.925090] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.541 [2024-10-17 16:59:13.992239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:00.541 [2024-10-17 16:59:14.049735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.541 [2024-10-17 16:59:14.049788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.541 [2024-10-17 16:59:14.049817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.541 [2024-10-17 16:59:14.049829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.541 [2024-10-17 16:59:14.049838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.541 [2024-10-17 16:59:14.051187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.541 [2024-10-17 16:59:14.051193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.541 [2024-10-17 16:59:14.132097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:00.541 [2024-10-17 16:59:14.132131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:00.541 [2024-10-17 16:59:14.132402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
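The `reactor_is_busy_or_idle` checks that follow parse one `top -bHn 1` line for the reactor thread, pull field 9 (%CPU) with `awk`, truncate it to an integer, and compare it against the busy (65%) and idle (30%) thresholds. A hedged Python approximation of that classification (the function name is illustrative; the shell version retries with a sleep rather than returning "indeterminate"):

```python
def classify_reactor(top_line: str, busy_threshold: int = 65,
                     idle_threshold: int = 30) -> str:
    """Classify a reactor thread from a single `top -bHn 1` output line,
    e.g. '2523415 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0'.
    Field 9 is %CPU; int(float(...)) mirrors the script's truncation of
    '99.9' to 99 and '0.0' to 0."""
    cpu_rate = int(float(top_line.split()[8]))
    if cpu_rate >= busy_threshold:
        return "busy"
    if cpu_rate <= idle_threshold:
        return "idle"
    return "indeterminate"
```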
00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:00.541 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:00.542 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:00.542 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:00.542 5000+0 records in 00:31:00.542 5000+0 records out 00:31:00.542 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0114235 s, 896 MB/s 00:31:00.542 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:00.542 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.542 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.802 AIO0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.802 16:59:14 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.802 [2024-10-17 16:59:14.247913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.802 [2024-10-17 16:59:14.272158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2523415 0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2523415 0 idle 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523415 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0' 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523415 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:00.802 
16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2523415 1 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2523415 1 idle 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:00.802 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523484 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523484 root 20 0 128.2g 
47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2523579 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2523415 0 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2523415 0 busy 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:01.062 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523415 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.26 reactor_0' 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523415 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.26 reactor_0 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:01.322 16:59:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:31:02.258 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:31:02.258 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:02.258 16:59:15 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:02.258 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523415 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.55 reactor_0' 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523415 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.55 reactor_0 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2523415 1 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2523415 1 busy 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:02.516 16:59:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523484 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.31 reactor_1' 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523484 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.31 reactor_1 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:02.516 16:59:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2523579 00:31:12.500 Initializing NVMe Controllers 00:31:12.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:12.500 
Controller IO queue size 256, less than required. 00:31:12.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:12.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:12.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:12.500 Initialization complete. Launching workers. 00:31:12.500 ======================================================== 00:31:12.500 Latency(us) 00:31:12.500 Device Information : IOPS MiB/s Average min max 00:31:12.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13797.30 53.90 18566.51 4438.55 22338.19 00:31:12.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 12628.10 49.33 20286.60 4403.96 22832.34 00:31:12.500 ======================================================== 00:31:12.500 Total : 26425.40 103.22 19388.50 4403.96 22832.34 00:31:12.500 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2523415 0 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2523415 0 idle 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:12.500 16:59:24 
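In the spdk_nvme_perf summary above, the Total average latency is the IOPS-weighted mean of the two per-core averages, not a simple mean. Checking that arithmetic against the reported figures:

```python
# (IOPS, average latency in us) for lcore 2 and lcore 3, from the log
cores = [(13797.30, 18566.51), (12628.10, 20286.60)]

total_iops = sum(iops for iops, _ in cores)
weighted_avg = sum(iops * lat for iops, lat in cores) / total_iops
# total_iops ~ 26425.40 and weighted_avg ~ 19388.50, matching the
# "Total" row of the perf output
```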
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:12.500 16:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523415 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0' 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523415 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2523415 1 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2523415 1 idle 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523484 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523484 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:12.500 16:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:12.501 16:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:31:12.501 16:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:12.501 16:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:12.501 16:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2523415 0 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2523415 0 idle 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:13.881 16:59:27 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:13.881 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523415 root 20 0 128.2g 60672 34944 S 6.7 0.1 0:20.31 reactor_0' 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523415 root 20 0 128.2g 60672 34944 S 6.7 0.1 0:20.31 reactor_0 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2523415 1 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2523415 1 idle 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2523415 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:14.139 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:14.140 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:14.140 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:14.140 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2523415 -w 256 00:31:14.140 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2523484 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2523484 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:14.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.398 16:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.398 rmmod nvme_tcp 00:31:14.398 rmmod nvme_fabrics 00:31:14.398 rmmod nvme_keyring 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 2523415 ']' 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2523415 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2523415 ']' 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2523415 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2523415 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2523415' 00:31:14.398 killing process with pid 2523415 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2523415 00:31:14.398 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2523415 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ 
tcp == \t\c\p ]] 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.656 16:59:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.190 16:59:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.191 00:31:17.191 real 0m18.756s 00:31:17.191 user 0m35.913s 00:31:17.191 sys 0m6.984s 00:31:17.191 16:59:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.191 16:59:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:17.191 ************************************ 00:31:17.191 END TEST nvmf_interrupt 00:31:17.191 ************************************ 00:31:17.191 00:31:17.191 real 24m53.523s 00:31:17.191 user 58m46.812s 00:31:17.191 sys 6m35.041s 00:31:17.191 16:59:30 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.191 16:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.191 ************************************ 00:31:17.191 END TEST nvmf_tcp 00:31:17.191 ************************************ 00:31:17.191 16:59:30 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:31:17.191 16:59:30 -- 
spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:17.191 16:59:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:17.191 16:59:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:17.191 16:59:30 -- common/autotest_common.sh@10 -- # set +x 00:31:17.191 ************************************ 00:31:17.191 START TEST spdkcli_nvmf_tcp 00:31:17.191 ************************************ 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:17.191 * Looking for test storage... 00:31:17.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
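The reactor idle checks traced above (interrupt/common.sh) capture one `top -bHn 1` line per reactor thread, strip leading whitespace, pull the %CPU column, truncate it to an integer, and compare it against the idle threshold. A minimal standalone sketch of that parsing, using a sample line and the threshold value (30) copied from this log — how common.sh truncates the decimal is an assumption inferred from the `cpu_rate=0.0` then `cpu_rate=0` trace:

```shell
# Sample `top -bHn 1 -p <pid>` line for reactor_0, copied from the log above.
top_reactor='2523415 root      20   0  128.2g  48384  34944 S  0.0  0.1   0:20.21 reactor_0'
# Strip leading whitespace and take field 9 (%CPU), as the traced pipeline does.
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
# Truncate to an integer (the trace shows 0.0 becoming 0); exact mechanism assumed.
cpu_rate=${cpu_rate%.*}
idle_threshold=30
if (( cpu_rate > idle_threshold )); then
    echo busy
else
    echo idle
fi
```

With the sample line's 0.0 %CPU this prints `idle`, matching the `reactor_is_idle` result in the trace.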
00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.191 --rc genhtml_branch_coverage=1 00:31:17.191 --rc genhtml_function_coverage=1 00:31:17.191 --rc genhtml_legend=1 00:31:17.191 --rc geninfo_all_blocks=1 
00:31:17.191 --rc geninfo_unexecuted_blocks=1 00:31:17.191 00:31:17.191 ' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.191 --rc genhtml_branch_coverage=1 00:31:17.191 --rc genhtml_function_coverage=1 00:31:17.191 --rc genhtml_legend=1 00:31:17.191 --rc geninfo_all_blocks=1 00:31:17.191 --rc geninfo_unexecuted_blocks=1 00:31:17.191 00:31:17.191 ' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.191 --rc genhtml_branch_coverage=1 00:31:17.191 --rc genhtml_function_coverage=1 00:31:17.191 --rc genhtml_legend=1 00:31:17.191 --rc geninfo_all_blocks=1 00:31:17.191 --rc geninfo_unexecuted_blocks=1 00:31:17.191 00:31:17.191 ' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.191 --rc genhtml_branch_coverage=1 00:31:17.191 --rc genhtml_function_coverage=1 00:31:17.191 --rc genhtml_legend=1 00:31:17.191 --rc geninfo_all_blocks=1 00:31:17.191 --rc geninfo_unexecuted_blocks=1 00:31:17.191 00:31:17.191 ' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:17.191 16:59:30 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
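The `lt 1.15 2` / `cmp_versions` trace above splits dotted version strings on `.` and `-` and compares them field by field, padding missing fields with zero. A minimal sketch of that comparison (the function name `ver_lt` is mine, not SPDK's; scripts/common.sh implements it differently in detail):

```shell
# Return 0 if version $1 is strictly less than version $2.
ver_lt() {
    local IFS='.-'                # split fields on '.' and '-', as the trace does
    local -a a=($1) b=($2)
    local i x y
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0} # pad the shorter version with zeros
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                      # equal is not "less than"
}

ver_lt 1.15 2 && echo lt || echo ge
```

Called as in the trace (`ver_lt 1.15 2`), the first field comparison 1 < 2 decides it and `lt` is printed.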
00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:17.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:17.191 16:59:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2525592 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2525592 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 
2525592 ']' 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:17.192 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.192 [2024-10-17 16:59:30.687023] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:31:17.192 [2024-10-17 16:59:30.687110] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2525592 ] 00:31:17.192 [2024-10-17 16:59:30.748988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.192 [2024-10-17 16:59:30.809686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.192 [2024-10-17 16:59:30.809690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 16:59:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:17.450 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:17.450 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:17.450 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:17.450 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:17.450 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:17.450 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:17.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:17.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:17.450 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:17.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:17.450 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:17.450 ' 00:31:19.983 [2024-10-17 16:59:33.603662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.359 [2024-10-17 16:59:34.888243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:31:23.895 [2024-10-17 16:59:37.251739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:25.806 [2024-10-17 16:59:39.274226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:27.183 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:27.183 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:27.183 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:27.183 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:27.183 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:27.183 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:27.183 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:27.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:27.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:27.183 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:27.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:27.183 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:27.441 16:59:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:27.698 16:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.955 16:59:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:27.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:31:27.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:27.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:27.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:27.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:27.956 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:27.956 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:27.956 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:27.956 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:27.956 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:27.956 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:27.956 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:27.956 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:27.956 ' 00:31:33.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:33.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:33.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:33.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:33.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:33.317 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:33.317 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:33.317 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:33.317 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:33.317 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:33.317 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:33.317 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:33.317 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:33.317 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2525592 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2525592 ']' 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2525592 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2525592 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2525592' 00:31:33.317 killing process with pid 2525592 00:31:33.317 16:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2525592 00:31:33.317 16:59:46 
spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2525592 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2525592 ']' 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2525592 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2525592 ']' 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2525592 00:31:33.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2525592) - No such process 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2525592 is not found' 00:31:33.576 Process with pid 2525592 is not found 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:33.576 00:31:33.576 real 0m16.675s 00:31:33.576 user 0m35.613s 00:31:33.576 sys 0m0.739s 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:33.576 16:59:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:33.576 ************************************ 00:31:33.576 END TEST spdkcli_nvmf_tcp 00:31:33.576 ************************************ 00:31:33.576 16:59:47 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:33.576 16:59:47 -- common/autotest_common.sh@1101 -- # '[' 3 
-le 1 ']' 00:31:33.576 16:59:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:33.576 16:59:47 -- common/autotest_common.sh@10 -- # set +x 00:31:33.576 ************************************ 00:31:33.576 START TEST nvmf_identify_passthru 00:31:33.576 ************************************ 00:31:33.576 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:33.576 * Looking for test storage... 00:31:33.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.576 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:33.576 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:31:33.576 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:33.835 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:33.835 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.835 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.835 --rc genhtml_branch_coverage=1 00:31:33.835 --rc genhtml_function_coverage=1 00:31:33.835 --rc genhtml_legend=1 
00:31:33.835 --rc geninfo_all_blocks=1 00:31:33.835 --rc geninfo_unexecuted_blocks=1 00:31:33.835 00:31:33.835 ' 00:31:33.835 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.835 --rc genhtml_branch_coverage=1 00:31:33.835 --rc genhtml_function_coverage=1 00:31:33.835 --rc genhtml_legend=1 00:31:33.835 --rc geninfo_all_blocks=1 00:31:33.835 --rc geninfo_unexecuted_blocks=1 00:31:33.835 00:31:33.835 ' 00:31:33.835 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.835 --rc genhtml_branch_coverage=1 00:31:33.835 --rc genhtml_function_coverage=1 00:31:33.835 --rc genhtml_legend=1 00:31:33.835 --rc geninfo_all_blocks=1 00:31:33.835 --rc geninfo_unexecuted_blocks=1 00:31:33.835 00:31:33.835 ' 00:31:33.835 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.835 --rc genhtml_branch_coverage=1 00:31:33.835 --rc genhtml_function_coverage=1 00:31:33.835 --rc genhtml_legend=1 00:31:33.835 --rc geninfo_all_blocks=1 00:31:33.835 --rc geninfo_unexecuted_blocks=1 00:31:33.835 00:31:33.835 ' 00:31:33.835 16:59:47 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.835 16:59:47 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.835 16:59:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.835 16:59:47 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.835 16:59:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.835 16:59:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.835 16:59:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:33.835 16:59:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:33.835 16:59:47 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:33.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.835 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.836 16:59:47 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.836 16:59:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.836 16:59:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.836 16:59:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.836 16:59:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.836 16:59:47 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.836 16:59:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.836 16:59:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.836 16:59:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:33.836 16:59:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.836 16:59:47 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.836 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:33.836 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:33.836 16:59:47 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.836 16:59:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.738 
16:59:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.738 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:35.739 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:35.739 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:35.739 Found net devices under 0000:09:00.0: cvl_0_0 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.739 16:59:49 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:35.739 Found net devices under 0000:09:00.1: cvl_0_1 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.739 
16:59:49 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.739 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:35.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:31:35.997 00:31:35.997 --- 10.0.0.2 ping statistics --- 00:31:35.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.997 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:31:35.997 00:31:35.997 --- 10.0.0.1 ping statistics --- 00:31:35.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.997 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.997 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:35.998 16:59:49 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:35.998 
16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:31:35.998 16:59:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:0b:00.0 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:35.998 16:59:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:40.184 16:59:53 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:31:40.184 16:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:40.184 16:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:40.184 16:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2530238 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:44.368 16:59:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2530238 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2530238 ']' 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:44.368 16:59:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.368 [2024-10-17 16:59:58.006573] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:31:44.368 [2024-10-17 16:59:58.006656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.627 [2024-10-17 16:59:58.072448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:44.627 [2024-10-17 16:59:58.130057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.627 [2024-10-17 16:59:58.130111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.627 [2024-10-17 16:59:58.130142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.627 [2024-10-17 16:59:58.130154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.627 [2024-10-17 16:59:58.130164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:44.627 [2024-10-17 16:59:58.131660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.627 [2024-10-17 16:59:58.131725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.627 [2024-10-17 16:59:58.131791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:44.627 [2024-10-17 16:59:58.131794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:31:44.627 16:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.627 INFO: Log level set to 20 00:31:44.627 INFO: Requests: 00:31:44.627 { 00:31:44.627 "jsonrpc": "2.0", 00:31:44.627 "method": "nvmf_set_config", 00:31:44.627 "id": 1, 00:31:44.627 "params": { 00:31:44.627 "admin_cmd_passthru": { 00:31:44.627 "identify_ctrlr": true 00:31:44.627 } 00:31:44.627 } 00:31:44.627 } 00:31:44.627 00:31:44.627 INFO: response: 00:31:44.627 { 00:31:44.627 "jsonrpc": "2.0", 00:31:44.627 "id": 1, 00:31:44.627 "result": true 00:31:44.627 } 00:31:44.627 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.627 16:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.627 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.627 INFO: Setting log level to 20 00:31:44.627 INFO: Setting log level to 20 00:31:44.627 INFO: Log level set to 20 00:31:44.627 INFO: Log level set to 20 00:31:44.627 
INFO: Requests: 00:31:44.627 { 00:31:44.627 "jsonrpc": "2.0", 00:31:44.627 "method": "framework_start_init", 00:31:44.627 "id": 1 00:31:44.627 } 00:31:44.627 00:31:44.627 INFO: Requests: 00:31:44.627 { 00:31:44.627 "jsonrpc": "2.0", 00:31:44.627 "method": "framework_start_init", 00:31:44.627 "id": 1 00:31:44.627 } 00:31:44.627 00:31:44.886 [2024-10-17 16:59:58.334712] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:44.886 INFO: response: 00:31:44.886 { 00:31:44.886 "jsonrpc": "2.0", 00:31:44.886 "id": 1, 00:31:44.886 "result": true 00:31:44.886 } 00:31:44.886 00:31:44.886 INFO: response: 00:31:44.886 { 00:31:44.886 "jsonrpc": "2.0", 00:31:44.886 "id": 1, 00:31:44.886 "result": true 00:31:44.886 } 00:31:44.886 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.886 16:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.886 INFO: Setting log level to 40 00:31:44.886 INFO: Setting log level to 40 00:31:44.886 INFO: Setting log level to 40 00:31:44.886 [2024-10-17 16:59:58.344851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.886 16:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:44.886 16:59:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:31:44.886 16:59:58 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.886 16:59:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.166 Nvme0n1 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.166 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.166 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.166 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.166 [2024-10-17 17:00:01.251660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.166 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:48.166 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.166 17:00:01 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.166 [ 00:31:48.166 { 00:31:48.166 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:48.166 "subtype": "Discovery", 00:31:48.166 "listen_addresses": [], 00:31:48.166 "allow_any_host": true, 00:31:48.166 "hosts": [] 00:31:48.166 }, 00:31:48.166 { 00:31:48.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.167 "subtype": "NVMe", 00:31:48.167 "listen_addresses": [ 00:31:48.167 { 00:31:48.167 "trtype": "TCP", 00:31:48.167 "adrfam": "IPv4", 00:31:48.167 "traddr": "10.0.0.2", 00:31:48.167 "trsvcid": "4420" 00:31:48.167 } 00:31:48.167 ], 00:31:48.167 "allow_any_host": true, 00:31:48.167 "hosts": [], 00:31:48.167 "serial_number": "SPDK00000000000001", 00:31:48.167 "model_number": "SPDK bdev Controller", 00:31:48.167 "max_namespaces": 1, 00:31:48.167 "min_cntlid": 1, 00:31:48.167 "max_cntlid": 65519, 00:31:48.167 "namespaces": [ 00:31:48.167 { 00:31:48.167 "nsid": 1, 00:31:48.167 "bdev_name": "Nvme0n1", 00:31:48.167 "name": "Nvme0n1", 00:31:48.167 "nguid": "8C0523170BBC486F820C969BFF1F3912", 00:31:48.167 "uuid": "8c052317-0bbc-486f-820c-969bff1f3912" 00:31:48.167 } 00:31:48.167 ] 00:31:48.167 } 00:31:48.167 ] 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:48.167 17:00:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.167 rmmod nvme_tcp 00:31:48.167 rmmod nvme_fabrics 00:31:48.167 rmmod nvme_keyring 00:31:48.167 17:00:01 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 2530238 ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2530238 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2530238 ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2530238 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2530238 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2530238' 00:31:48.167 killing process with pid 2530238 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2530238 00:31:48.167 17:00:01 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2530238 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:31:49.540 17:00:03 nvmf_identify_passthru -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.540 17:00:03 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.540 17:00:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:49.540 17:00:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.069 17:00:05 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.069 00:31:52.069 real 0m18.093s 00:31:52.069 user 0m26.024s 00:31:52.069 sys 0m3.150s 00:31:52.069 17:00:05 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:52.069 17:00:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:52.069 ************************************ 00:31:52.069 END TEST nvmf_identify_passthru 00:31:52.069 ************************************ 00:31:52.069 17:00:05 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:52.069 17:00:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:52.069 17:00:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:52.069 17:00:05 -- common/autotest_common.sh@10 -- # set +x 00:31:52.069 ************************************ 00:31:52.069 START TEST nvmf_dif 00:31:52.069 ************************************ 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:52.069 * Looking for test storage... 
00:31:52.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.069 --rc genhtml_branch_coverage=1 00:31:52.069 --rc genhtml_function_coverage=1 00:31:52.069 --rc genhtml_legend=1 00:31:52.069 --rc geninfo_all_blocks=1 00:31:52.069 --rc geninfo_unexecuted_blocks=1 00:31:52.069 00:31:52.069 ' 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.069 --rc genhtml_branch_coverage=1 00:31:52.069 --rc genhtml_function_coverage=1 00:31:52.069 --rc genhtml_legend=1 00:31:52.069 --rc geninfo_all_blocks=1 00:31:52.069 --rc geninfo_unexecuted_blocks=1 00:31:52.069 00:31:52.069 ' 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:31:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.069 --rc genhtml_branch_coverage=1 00:31:52.069 --rc genhtml_function_coverage=1 00:31:52.069 --rc genhtml_legend=1 00:31:52.069 --rc geninfo_all_blocks=1 00:31:52.069 --rc geninfo_unexecuted_blocks=1 00:31:52.069 00:31:52.069 ' 00:31:52.069 17:00:05 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.069 --rc genhtml_branch_coverage=1 00:31:52.069 --rc genhtml_function_coverage=1 00:31:52.069 --rc genhtml_legend=1 00:31:52.069 --rc geninfo_all_blocks=1 00:31:52.069 --rc geninfo_unexecuted_blocks=1 00:31:52.069 00:31:52.069 ' 00:31:52.069 17:00:05 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:52.069 17:00:05 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.069 17:00:05 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.069 17:00:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.069 17:00:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.069 17:00:05 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.069 17:00:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:52.069 17:00:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:52.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.069 17:00:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.069 17:00:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:52.069 17:00:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:31:52.069 17:00:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:52.070 17:00:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:52.070 17:00:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.070 17:00:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:52.070 17:00:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:52.070 17:00:05 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.070 17:00:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.969 17:00:07 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:53.970 17:00:07 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:53.970 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:53.970 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:53.970 17:00:07 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:53.970 Found net devices under 0000:09:00.0: cvl_0_0 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:53.970 Found net devices under 0000:09:00.1: cvl_0_1 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.970 
17:00:07 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:53.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:31:53.970 00:31:53.970 --- 10.0.0.2 ping statistics --- 00:31:53.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.970 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:31:53.970 00:31:53.970 --- 10.0.0.1 ping statistics --- 00:31:53.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.970 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:31:53.970 17:00:07 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:54.905 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:54.905 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:54.905 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:54.905 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:54.905 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:54.905 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:54.905 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:54.905 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:54.905 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:54.905 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:54.905 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:54.905 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:54.905 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:31:54.905 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:54.905 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:54.905 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:54.905 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:55.164 17:00:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:55.164 17:00:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:55.164 17:00:08 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:55.164 17:00:08 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.165 17:00:08 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2533959 00:31:55.165 17:00:08 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:55.165 17:00:08 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2533959 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2533959 ']' 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:55.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.165 17:00:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.165 [2024-10-17 17:00:08.813862] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:31:55.165 [2024-10-17 17:00:08.813945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.423 [2024-10-17 17:00:08.900326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.423 [2024-10-17 17:00:08.975901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.423 [2024-10-17 17:00:08.975962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.423 [2024-10-17 17:00:08.976019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.423 [2024-10-17 17:00:08.976042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.423 [2024-10-17 17:00:08.976061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:55.423 [2024-10-17 17:00:08.976790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:31:55.683 17:00:09 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.683 17:00:09 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.683 17:00:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:55.683 17:00:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.683 [2024-10-17 17:00:09.200812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.683 17:00:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:55.683 17:00:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.683 ************************************ 00:31:55.683 START TEST fio_dif_1_default 00:31:55.683 ************************************ 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:55.683 bdev_null0 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.683 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:55.684 [2024-10-17 17:00:09.257142] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:55.684 { 00:31:55.684 "params": { 00:31:55.684 "name": "Nvme$subsystem", 00:31:55.684 "trtype": "$TEST_TRANSPORT", 00:31:55.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:55.684 "adrfam": "ipv4", 00:31:55.684 "trsvcid": "$NVMF_PORT", 00:31:55.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:55.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:55.684 "hdgst": ${hdgst:-false}, 00:31:55.684 "ddgst": ${ddgst:-false} 00:31:55.684 }, 00:31:55.684 "method": "bdev_nvme_attach_controller" 00:31:55.684 } 00:31:55.684 EOF 00:31:55.684 )") 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:55.684 17:00:09 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:55.684 "params": { 00:31:55.684 "name": "Nvme0", 00:31:55.684 "trtype": "tcp", 00:31:55.684 "traddr": "10.0.0.2", 00:31:55.684 "adrfam": "ipv4", 00:31:55.684 "trsvcid": "4420", 00:31:55.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:55.684 "hdgst": false, 00:31:55.684 "ddgst": false 00:31:55.684 }, 00:31:55.684 "method": "bdev_nvme_attach_controller" 00:31:55.684 }' 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:55.684 17:00:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.942 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:55.942 fio-3.35 
00:31:55.942 Starting 1 thread 00:32:08.136 00:32:08.136 filename0: (groupid=0, jobs=1): err= 0: pid=2534313: Thu Oct 17 17:00:20 2024 00:32:08.136 read: IOPS=189, BW=760KiB/s (778kB/s)(7616KiB/10025msec) 00:32:08.136 slat (nsec): min=7039, max=85846, avg=8937.71, stdev=3340.40 00:32:08.136 clat (usec): min=559, max=42424, avg=21031.66, stdev=20424.43 00:32:08.137 lat (usec): min=566, max=42436, avg=21040.60, stdev=20424.22 00:32:08.137 clat percentiles (usec): 00:32:08.137 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:32:08.137 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[ 758], 60.00th=[41157], 00:32:08.137 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:08.137 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:08.137 | 99.99th=[42206] 00:32:08.137 bw ( KiB/s): min= 672, max= 832, per=100.00%, avg=760.00, stdev=30.93, samples=20 00:32:08.137 iops : min= 168, max= 208, avg=190.00, stdev= 7.73, samples=20 00:32:08.137 lat (usec) : 750=49.89%, 1000=0.11% 00:32:08.137 lat (msec) : 50=50.00% 00:32:08.137 cpu : usr=91.26%, sys=8.43%, ctx=16, majf=0, minf=245 00:32:08.137 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.137 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.137 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:08.137 00:32:08.137 Run status group 0 (all jobs): 00:32:08.137 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7616KiB (7799kB), run=10025-10025msec 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 00:32:08.137 real 0m11.349s 00:32:08.137 user 0m10.509s 00:32:08.137 sys 0m1.129s 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 ************************************ 00:32:08.137 END TEST fio_dif_1_default 00:32:08.137 ************************************ 00:32:08.137 17:00:20 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:08.137 17:00:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:08.137 17:00:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 ************************************ 00:32:08.137 START TEST fio_dif_1_multi_subsystems 00:32:08.137 ************************************ 00:32:08.137 17:00:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 bdev_null0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 [2024-10-17 17:00:20.654019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 bdev_null1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:08.137 { 00:32:08.137 "params": { 00:32:08.137 "name": "Nvme$subsystem", 00:32:08.137 "trtype": "$TEST_TRANSPORT", 00:32:08.137 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:32:08.137 "adrfam": "ipv4", 00:32:08.137 "trsvcid": "$NVMF_PORT", 00:32:08.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.137 "hdgst": ${hdgst:-false}, 00:32:08.137 "ddgst": ${ddgst:-false} 00:32:08.137 }, 00:32:08.137 "method": "bdev_nvme_attach_controller" 00:32:08.137 } 00:32:08.137 EOF 00:32:08.137 )") 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:08.137 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:08.138 { 00:32:08.138 "params": { 00:32:08.138 "name": "Nvme$subsystem", 00:32:08.138 "trtype": "$TEST_TRANSPORT", 00:32:08.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.138 "adrfam": "ipv4", 00:32:08.138 "trsvcid": "$NVMF_PORT", 00:32:08.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.138 "hdgst": ${hdgst:-false}, 00:32:08.138 "ddgst": ${ddgst:-false} 00:32:08.138 }, 00:32:08.138 "method": "bdev_nvme_attach_controller" 00:32:08.138 } 00:32:08.138 EOF 00:32:08.138 )") 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:08.138 "params": { 00:32:08.138 "name": "Nvme0", 00:32:08.138 "trtype": "tcp", 00:32:08.138 "traddr": "10.0.0.2", 00:32:08.138 "adrfam": "ipv4", 00:32:08.138 "trsvcid": "4420", 00:32:08.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.138 "hdgst": false, 00:32:08.138 "ddgst": false 00:32:08.138 }, 00:32:08.138 "method": "bdev_nvme_attach_controller" 00:32:08.138 },{ 00:32:08.138 "params": { 00:32:08.138 "name": "Nvme1", 00:32:08.138 "trtype": "tcp", 00:32:08.138 "traddr": "10.0.0.2", 00:32:08.138 "adrfam": "ipv4", 00:32:08.138 "trsvcid": "4420", 00:32:08.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:08.138 "hdgst": false, 00:32:08.138 "ddgst": false 00:32:08.138 }, 00:32:08.138 "method": "bdev_nvme_attach_controller" 00:32:08.138 }' 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:08.138 17:00:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.138 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:08.138 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:08.138 fio-3.35 00:32:08.138 Starting 2 threads 00:32:20.336 00:32:20.336 filename0: (groupid=0, jobs=1): err= 0: pid=2535867: Thu Oct 17 17:00:31 2024 00:32:20.336 read: IOPS=190, BW=761KiB/s (779kB/s)(7616KiB/10008msec) 00:32:20.336 slat (nsec): min=4125, max=66281, avg=10512.48, stdev=5099.28 00:32:20.336 clat (usec): min=573, max=45886, avg=20992.08, stdev=20330.79 00:32:20.336 lat (usec): min=581, max=45899, avg=21002.59, stdev=20329.65 00:32:20.336 clat percentiles (usec): 00:32:20.336 | 1.00th=[ 594], 5.00th=[ 611], 10.00th=[ 619], 20.00th=[ 627], 00:32:20.336 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[ 1090], 60.00th=[41157], 00:32:20.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:20.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:32:20.336 | 99.99th=[45876] 00:32:20.336 bw ( KiB/s): min= 672, max= 832, per=66.05%, avg=760.00, stdev=30.93, samples=20 00:32:20.336 iops : min= 168, max= 208, avg=190.00, stdev= 7.73, samples=20 00:32:20.336 lat (usec) : 750=44.54%, 1000=4.78% 00:32:20.336 lat (msec) : 2=0.68%, 50=50.00% 00:32:20.336 cpu : usr=97.57%, sys=2.13%, ctx=22, majf=0, minf=165 00:32:20.336 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:20.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.336 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.336 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:20.336 filename1: (groupid=0, jobs=1): err= 0: pid=2535868: Thu Oct 17 17:00:31 2024 00:32:20.336 read: IOPS=98, BW=392KiB/s (401kB/s)(3936KiB/10040msec) 00:32:20.336 slat (nsec): min=5704, max=76258, avg=12379.00, stdev=6018.07 00:32:20.336 clat (usec): min=599, max=47442, avg=40771.80, stdev=3673.01 00:32:20.336 lat (usec): min=609, max=47467, avg=40784.18, stdev=3673.13 00:32:20.336 clat percentiles (usec): 00:32:20.336 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:20.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:20.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:32:20.336 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:32:20.336 | 99.99th=[47449] 00:32:20.336 bw ( KiB/s): min= 352, max= 448, per=34.07%, avg=392.00, stdev=20.44, samples=20 00:32:20.336 iops : min= 88, max= 112, avg=98.00, stdev= 5.11, samples=20 00:32:20.336 lat (usec) : 750=0.81% 00:32:20.336 lat (msec) : 50=99.19% 00:32:20.336 cpu : usr=97.38%, sys=2.32%, ctx=9, majf=0, minf=173 00:32:20.336 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.336 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.336 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:20.336 00:32:20.336 Run status group 0 (all jobs): 00:32:20.336 READ: bw=1151KiB/s (1178kB/s), 392KiB/s-761KiB/s (401kB/s-779kB/s), io=11.3MiB (11.8MB), run=10008-10040msec 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 17:00:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.336 00:32:20.336 real 0m11.612s 00:32:20.336 user 0m21.214s 00:32:20.336 sys 0m0.791s 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 ************************************ 00:32:20.336 END TEST fio_dif_1_multi_subsystems 00:32:20.336 ************************************ 00:32:20.336 17:00:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:20.336 17:00:32 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:20.336 17:00:32 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 ************************************ 00:32:20.336 START TEST fio_dif_rand_params 00:32:20.336 ************************************ 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:20.336 17:00:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 bdev_null0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:20.336 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:20.337 [2024-10-17 17:00:32.312136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:20.337 { 00:32:20.337 "params": { 00:32:20.337 "name": "Nvme$subsystem", 00:32:20.337 "trtype": "$TEST_TRANSPORT", 00:32:20.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.337 "adrfam": "ipv4", 00:32:20.337 "trsvcid": "$NVMF_PORT", 00:32:20.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.337 "hdgst": ${hdgst:-false}, 00:32:20.337 "ddgst": ${ddgst:-false} 00:32:20.337 }, 
00:32:20.337 "method": "bdev_nvme_attach_controller" 00:32:20.337 } 00:32:20.337 EOF 00:32:20.337 )") 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:20.337 
17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:20.337 "params": { 00:32:20.337 "name": "Nvme0", 00:32:20.337 "trtype": "tcp", 00:32:20.337 "traddr": "10.0.0.2", 00:32:20.337 "adrfam": "ipv4", 00:32:20.337 "trsvcid": "4420", 00:32:20.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:20.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:20.337 "hdgst": false, 00:32:20.337 "ddgst": false 00:32:20.337 }, 00:32:20.337 "method": "bdev_nvme_attach_controller" 00:32:20.337 }' 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:20.337 17:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:20.337 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:20.337 ... 00:32:20.337 fio-3.35 00:32:20.337 Starting 3 threads 00:32:24.577 00:32:24.577 filename0: (groupid=0, jobs=1): err= 0: pid=2537265: Thu Oct 17 17:00:38 2024 00:32:24.577 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5004msec) 00:32:24.577 slat (nsec): min=4898, max=66076, avg=16312.77, stdev=5372.52 00:32:24.577 clat (usec): min=5664, max=93085, avg=11845.24, stdev=4618.68 00:32:24.577 lat (usec): min=5678, max=93101, avg=11861.56, stdev=4618.63 00:32:24.577 clat percentiles (usec): 00:32:24.577 | 1.00th=[ 6849], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[10159], 00:32:24.577 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:32:24.577 | 70.00th=[12518], 80.00th=[13173], 90.00th=[13960], 95.00th=[14615], 00:32:24.577 | 99.00th=[16712], 99.50th=[51119], 99.90th=[53740], 99.95th=[92799], 00:32:24.577 | 99.99th=[92799] 00:32:24.577 bw ( KiB/s): min=26880, max=36864, per=35.74%, avg=32332.80, stdev=2992.88, samples=10 00:32:24.577 iops : min= 210, max= 288, avg=252.60, stdev=23.38, samples=10 00:32:24.577 lat (msec) : 10=18.18%, 20=80.95%, 100=0.87% 00:32:24.578 cpu : usr=87.89%, sys=8.75%, ctx=196, majf=0, minf=110 00:32:24.578 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.578 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.578 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:24.578 filename0: (groupid=0, jobs=1): err= 0: pid=2537266: Thu Oct 17 17:00:38 2024 00:32:24.578 read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(141MiB/5044msec) 00:32:24.578 slat (nsec): min=4373, max=29111, avg=14237.26, 
stdev=1346.42 00:32:24.578 clat (usec): min=6828, max=52929, avg=13396.33, stdev=5471.66 00:32:24.578 lat (usec): min=6842, max=52943, avg=13410.57, stdev=5471.47 00:32:24.578 clat percentiles (usec): 00:32:24.578 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11076], 00:32:24.578 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[13435], 00:32:24.578 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15533], 95.00th=[16319], 00:32:24.578 | 99.00th=[51119], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:32:24.578 | 99.99th=[52691] 00:32:24.578 bw ( KiB/s): min=17408, max=33792, per=31.75%, avg=28723.20, stdev=4279.96, samples=10 00:32:24.578 iops : min= 136, max= 264, avg=224.40, stdev=33.44, samples=10 00:32:24.578 lat (msec) : 10=8.80%, 20=89.42%, 50=0.36%, 100=1.42% 00:32:24.578 cpu : usr=93.58%, sys=5.95%, ctx=9, majf=0, minf=79 00:32:24.578 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.578 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.578 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:24.578 filename0: (groupid=0, jobs=1): err= 0: pid=2537267: Thu Oct 17 17:00:38 2024 00:32:24.578 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5044msec) 00:32:24.578 slat (nsec): min=4524, max=75038, avg=14413.77, stdev=2846.71 00:32:24.578 clat (usec): min=5020, max=55794, avg=12824.29, stdev=5510.32 00:32:24.578 lat (usec): min=5029, max=55808, avg=12838.70, stdev=5510.22 00:32:24.578 clat percentiles (usec): 00:32:24.578 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10814], 00:32:24.578 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:32:24.578 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14615], 95.00th=[15401], 00:32:24.578 | 99.00th=[51643], 99.50th=[53216], 
99.90th=[54264], 99.95th=[55837], 00:32:24.578 | 99.99th=[55837] 00:32:24.578 bw ( KiB/s): min=25344, max=33536, per=33.19%, avg=30028.80, stdev=2888.88, samples=10 00:32:24.578 iops : min= 198, max= 262, avg=234.60, stdev=22.57, samples=10 00:32:24.578 lat (msec) : 10=11.66%, 20=86.64%, 50=0.17%, 100=1.53% 00:32:24.578 cpu : usr=93.85%, sys=5.67%, ctx=10, majf=0, minf=119 00:32:24.578 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.578 issued rwts: total=1175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.578 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:24.578 00:32:24.578 Run status group 0 (all jobs): 00:32:24.578 READ: bw=88.3MiB/s (92.6MB/s), 27.9MiB/s-31.6MiB/s (29.2MB/s-33.1MB/s), io=446MiB (467MB), run=5004-5044msec 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:25.145 17:00:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 bdev_null0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 [2024-10-17 17:00:38.577549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 bdev_null1 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.145 bdev_null2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@558 -- # local subsystem config 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.145 { 00:32:25.145 "params": { 00:32:25.145 "name": "Nvme$subsystem", 00:32:25.145 "trtype": "$TEST_TRANSPORT", 00:32:25.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.145 "adrfam": "ipv4", 00:32:25.145 "trsvcid": "$NVMF_PORT", 00:32:25.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.145 "hdgst": ${hdgst:-false}, 00:32:25.145 "ddgst": ${ddgst:-false} 00:32:25.145 }, 00:32:25.145 "method": "bdev_nvme_attach_controller" 00:32:25.145 } 00:32:25.145 EOF 00:32:25.145 )") 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:25.145 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.146 17:00:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.146 { 00:32:25.146 "params": { 00:32:25.146 "name": "Nvme$subsystem", 00:32:25.146 "trtype": "$TEST_TRANSPORT", 00:32:25.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.146 "adrfam": "ipv4", 00:32:25.146 "trsvcid": "$NVMF_PORT", 00:32:25.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.146 "hdgst": ${hdgst:-false}, 00:32:25.146 "ddgst": ${ddgst:-false} 00:32:25.146 }, 00:32:25.146 "method": "bdev_nvme_attach_controller" 00:32:25.146 } 00:32:25.146 EOF 00:32:25.146 )") 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:25.146 17:00:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.146 { 00:32:25.146 "params": { 00:32:25.146 "name": "Nvme$subsystem", 00:32:25.146 "trtype": "$TEST_TRANSPORT", 00:32:25.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.146 "adrfam": "ipv4", 00:32:25.146 "trsvcid": "$NVMF_PORT", 00:32:25.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.146 "hdgst": ${hdgst:-false}, 00:32:25.146 "ddgst": ${ddgst:-false} 00:32:25.146 }, 00:32:25.146 "method": "bdev_nvme_attach_controller" 00:32:25.146 } 00:32:25.146 EOF 00:32:25.146 )") 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:25.146 "params": { 00:32:25.146 "name": "Nvme0", 00:32:25.146 "trtype": "tcp", 00:32:25.146 "traddr": "10.0.0.2", 00:32:25.146 "adrfam": "ipv4", 00:32:25.146 "trsvcid": "4420", 00:32:25.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.146 "hdgst": false, 00:32:25.146 "ddgst": false 00:32:25.146 }, 00:32:25.146 "method": "bdev_nvme_attach_controller" 00:32:25.146 },{ 00:32:25.146 "params": { 00:32:25.146 "name": "Nvme1", 00:32:25.146 "trtype": "tcp", 00:32:25.146 "traddr": "10.0.0.2", 00:32:25.146 "adrfam": "ipv4", 00:32:25.146 "trsvcid": "4420", 00:32:25.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.146 "hdgst": false, 00:32:25.146 "ddgst": false 00:32:25.146 }, 00:32:25.146 "method": "bdev_nvme_attach_controller" 00:32:25.146 },{ 00:32:25.146 "params": { 00:32:25.146 "name": "Nvme2", 00:32:25.146 "trtype": "tcp", 00:32:25.146 "traddr": "10.0.0.2", 00:32:25.146 "adrfam": "ipv4", 00:32:25.146 "trsvcid": "4420", 00:32:25.146 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:25.146 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:25.146 "hdgst": false, 00:32:25.146 "ddgst": false 00:32:25.146 }, 00:32:25.146 "method": "bdev_nvme_attach_controller" 00:32:25.146 }' 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.146 17:00:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:25.146 17:00:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.405 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:25.405 ... 00:32:25.405 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:25.405 ... 00:32:25.405 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:25.405 ... 
00:32:25.405 fio-3.35 00:32:25.405 Starting 24 threads 00:32:37.608 00:32:37.608 filename0: (groupid=0, jobs=1): err= 0: pid=2538134: Thu Oct 17 17:00:49 2024 00:32:37.608 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:32:37.608 slat (usec): min=8, max=106, avg=27.49, stdev=15.92 00:32:37.608 clat (usec): min=15146, max=57925, avg=34065.94, stdev=1869.43 00:32:37.608 lat (usec): min=15174, max=57945, avg=34093.43, stdev=1869.33 00:32:37.608 clat percentiles (usec): 00:32:37.608 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:32:37.608 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:32:37.608 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:32:37.608 | 99.00th=[40633], 99.50th=[43779], 99.90th=[57934], 99.95th=[57934], 00:32:37.608 | 99.99th=[57934] 00:32:37.608 bw ( KiB/s): min= 1792, max= 1920, per=4.21%, avg=1862.40, stdev=65.33, samples=20 00:32:37.608 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:32:37.608 lat (msec) : 20=0.04%, 50=99.61%, 100=0.34% 00:32:37.608 cpu : usr=98.11%, sys=1.34%, ctx=43, majf=0, minf=73 00:32:37.608 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.608 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.608 filename0: (groupid=0, jobs=1): err= 0: pid=2538135: Thu Oct 17 17:00:49 2024 00:32:37.608 read: IOPS=458, BW=1836KiB/s (1880kB/s)(18.2MiB/10157msec) 00:32:37.608 slat (usec): min=8, max=150, avg=37.08, stdev=21.37 00:32:37.608 clat (msec): min=14, max=168, avg=34.53, stdev= 8.29 00:32:37.608 lat (msec): min=14, max=168, avg=34.57, stdev= 8.29 00:32:37.608 clat percentiles (msec): 00:32:37.608 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 34], 
20.00th=[ 34], 00:32:37.608 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.608 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.608 | 99.00th=[ 58], 99.50th=[ 59], 99.90th=[ 167], 99.95th=[ 167], 00:32:37.608 | 99.99th=[ 169] 00:32:37.608 bw ( KiB/s): min= 1648, max= 1920, per=4.20%, avg=1858.00, stdev=75.84, samples=20 00:32:37.608 iops : min= 412, max= 480, avg=464.50, stdev=18.96, samples=20 00:32:37.608 lat (msec) : 20=0.30%, 50=98.58%, 100=0.77%, 250=0.34% 00:32:37.608 cpu : usr=98.13%, sys=1.30%, ctx=35, majf=0, minf=48 00:32:37.608 IO depths : 1=4.0%, 2=10.2%, 4=24.8%, 8=52.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:32:37.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 issued rwts: total=4662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.608 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.608 filename0: (groupid=0, jobs=1): err= 0: pid=2538136: Thu Oct 17 17:00:49 2024 00:32:37.608 read: IOPS=485, BW=1941KiB/s (1987kB/s)(19.2MiB/10153msec) 00:32:37.608 slat (usec): min=8, max=138, avg=31.08, stdev=22.59 00:32:37.608 clat (msec): min=17, max=197, avg=32.70, stdev=10.66 00:32:37.608 lat (msec): min=17, max=197, avg=32.74, stdev=10.66 00:32:37.608 clat percentiles (msec): 00:32:37.608 | 1.00th=[ 21], 5.00th=[ 21], 10.00th=[ 22], 20.00th=[ 31], 00:32:37.608 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.608 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 38], 00:32:37.608 | 99.00th=[ 48], 99.50th=[ 63], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.608 | 99.99th=[ 199] 00:32:37.608 bw ( KiB/s): min= 1667, max= 2336, per=4.45%, avg=1964.15, stdev=189.91, samples=20 00:32:37.608 iops : min= 416, max= 584, avg=491.00, stdev=47.54, samples=20 00:32:37.608 lat (msec) : 20=0.08%, 50=99.07%, 100=0.53%, 250=0.32% 00:32:37.608 cpu : usr=97.36%, sys=1.61%, 
ctx=177, majf=0, minf=53 00:32:37.608 IO depths : 1=3.7%, 2=7.9%, 4=18.3%, 8=60.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:32:37.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 complete : 0=0.0%, 4=92.3%, 8=2.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.608 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.608 filename0: (groupid=0, jobs=1): err= 0: pid=2538137: Thu Oct 17 17:00:49 2024 00:32:37.608 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:32:37.608 slat (usec): min=8, max=121, avg=49.87, stdev=25.22 00:32:37.608 clat (usec): min=21350, max=57756, avg=33852.24, stdev=1848.40 00:32:37.608 lat (usec): min=21400, max=57779, avg=33902.11, stdev=1845.14 00:32:37.608 clat percentiles (usec): 00:32:37.608 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:37.608 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:32:37.608 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:32:37.608 | 99.00th=[39584], 99.50th=[43779], 99.90th=[57934], 99.95th=[57934], 00:32:37.608 | 99.99th=[57934] 00:32:37.608 bw ( KiB/s): min= 1792, max= 1920, per=4.21%, avg=1862.40, stdev=65.33, samples=20 00:32:37.608 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:32:37.608 lat (msec) : 50=99.66%, 100=0.34% 00:32:37.608 cpu : usr=97.20%, sys=1.80%, ctx=134, majf=0, minf=51 00:32:37.608 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.608 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.608 filename0: (groupid=0, jobs=1): err= 0: pid=2538138: Thu Oct 17 17:00:49 2024 00:32:37.608 
read: IOPS=458, BW=1834KiB/s (1878kB/s)(18.2MiB/10154msec) 00:32:37.608 slat (usec): min=13, max=130, avg=47.94, stdev=17.85 00:32:37.608 clat (msec): min=32, max=197, avg=34.43, stdev= 9.61 00:32:37.608 lat (msec): min=32, max=197, avg=34.48, stdev= 9.61 00:32:37.608 clat percentiles (msec): 00:32:37.608 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.608 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.608 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.608 | 99.00th=[ 47], 99.50th=[ 57], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.608 | 99.99th=[ 197] 00:32:37.608 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1856.00, stdev=77.69, samples=20 00:32:37.608 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:32:37.608 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.608 cpu : usr=98.44%, sys=1.10%, ctx=24, majf=0, minf=38 00:32:37.608 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.608 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.608 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.608 filename0: (groupid=0, jobs=1): err= 0: pid=2538139: Thu Oct 17 17:00:49 2024 00:32:37.608 read: IOPS=468, BW=1875KiB/s (1920kB/s)(18.3MiB/10002msec) 00:32:37.608 slat (usec): min=7, max=169, avg=46.28, stdev=33.51 00:32:37.608 clat (usec): min=19852, max=46678, avg=33711.01, stdev=1611.67 00:32:37.608 lat (usec): min=19913, max=46730, avg=33757.29, stdev=1609.03 00:32:37.608 clat percentiles (usec): 00:32:37.608 | 1.00th=[27919], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:37.608 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:32:37.608 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 
00:32:37.608 | 99.00th=[40109], 99.50th=[40633], 99.90th=[46400], 99.95th=[46400], 00:32:37.608 | 99.99th=[46924] 00:32:37.608 bw ( KiB/s): min= 1792, max= 1920, per=4.24%, avg=1872.84, stdev=63.44, samples=19 00:32:37.609 iops : min= 448, max= 480, avg=468.21, stdev=15.86, samples=19 00:32:37.609 lat (msec) : 20=0.06%, 50=99.94% 00:32:37.609 cpu : usr=97.02%, sys=1.99%, ctx=154, majf=0, minf=64 00:32:37.609 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename0: (groupid=0, jobs=1): err= 0: pid=2538140: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=460, BW=1842KiB/s (1887kB/s)(18.3MiB/10178msec) 00:32:37.609 slat (nsec): min=13702, max=95669, avg=45020.69, stdev=15901.63 00:32:37.609 clat (msec): min=20, max=195, avg=34.37, stdev= 9.49 00:32:37.609 lat (msec): min=20, max=195, avg=34.42, stdev= 9.49 00:32:37.609 clat percentiles (msec): 00:32:37.609 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.609 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.609 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.609 | 99.00th=[ 41], 99.50th=[ 47], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.609 | 99.99th=[ 197] 00:32:37.609 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.609 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:32:37.609 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.609 cpu : usr=98.41%, sys=1.17%, ctx=22, majf=0, minf=40 00:32:37.609 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename0: (groupid=0, jobs=1): err= 0: pid=2538141: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=457, BW=1831KiB/s (1875kB/s)(18.1MiB/10137msec) 00:32:37.609 slat (usec): min=8, max=108, avg=39.56, stdev=18.92 00:32:37.609 clat (msec): min=27, max=189, avg=34.59, stdev= 9.59 00:32:37.609 lat (msec): min=27, max=189, avg=34.63, stdev= 9.59 00:32:37.609 clat percentiles (msec): 00:32:37.609 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.609 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.609 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.609 | 99.00th=[ 44], 99.50th=[ 84], 99.90th=[ 190], 99.95th=[ 190], 00:32:37.609 | 99.99th=[ 190] 00:32:37.609 bw ( KiB/s): min= 1539, max= 1920, per=4.19%, avg=1849.75, stdev=96.66, samples=20 00:32:37.609 iops : min= 384, max= 480, avg=462.40, stdev=24.29, samples=20 00:32:37.609 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.609 cpu : usr=98.40%, sys=1.19%, ctx=16, majf=0, minf=42 00:32:37.609 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename1: (groupid=0, jobs=1): err= 0: pid=2538142: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=458, BW=1835KiB/s (1879kB/s)(18.2MiB/10151msec) 00:32:37.609 slat (usec): min=6, max=109, avg=46.49, stdev=16.63 00:32:37.609 clat (msec): min=27, max=195, avg=34.45, stdev= 9.59 00:32:37.609 lat (msec): min=27, max=195, 
avg=34.50, stdev= 9.59 00:32:37.609 clat percentiles (msec): 00:32:37.609 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.609 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.609 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.609 | 99.00th=[ 46], 99.50th=[ 57], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.609 | 99.99th=[ 197] 00:32:37.609 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1856.00, stdev=77.69, samples=20 00:32:37.609 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:32:37.609 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.609 cpu : usr=97.43%, sys=1.78%, ctx=105, majf=0, minf=38 00:32:37.609 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename1: (groupid=0, jobs=1): err= 0: pid=2538143: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.3MiB/10181msec) 00:32:37.609 slat (usec): min=10, max=109, avg=42.61, stdev=18.68 00:32:37.609 clat (msec): min=20, max=197, avg=34.40, stdev= 9.49 00:32:37.609 lat (msec): min=20, max=197, avg=34.44, stdev= 9.49 00:32:37.609 clat percentiles (msec): 00:32:37.609 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.609 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.609 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.609 | 99.00th=[ 41], 99.50th=[ 47], 99.90th=[ 194], 99.95th=[ 194], 00:32:37.609 | 99.99th=[ 199] 00:32:37.609 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.609 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 
00:32:37.609 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.609 cpu : usr=98.44%, sys=1.11%, ctx=34, majf=0, minf=41 00:32:37.609 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename1: (groupid=0, jobs=1): err= 0: pid=2538144: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=458, BW=1835KiB/s (1879kB/s)(18.2MiB/10151msec) 00:32:37.609 slat (usec): min=12, max=116, avg=51.11, stdev=17.22 00:32:37.609 clat (msec): min=28, max=195, avg=34.42, stdev= 9.60 00:32:37.609 lat (msec): min=28, max=195, avg=34.47, stdev= 9.60 00:32:37.609 clat percentiles (msec): 00:32:37.609 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.609 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.609 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.609 | 99.00th=[ 45], 99.50th=[ 57], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.609 | 99.99th=[ 197] 00:32:37.609 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1856.00, stdev=77.69, samples=20 00:32:37.609 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:32:37.609 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.609 cpu : usr=98.25%, sys=1.33%, ctx=21, majf=0, minf=52 00:32:37.609 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename1: (groupid=0, jobs=1): err= 0: 
pid=2538145: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=460, BW=1842KiB/s (1887kB/s)(18.3MiB/10178msec) 00:32:37.609 slat (usec): min=11, max=134, avg=55.13, stdev=22.45 00:32:37.609 clat (msec): min=20, max=195, avg=34.26, stdev= 9.50 00:32:37.609 lat (msec): min=20, max=195, avg=34.31, stdev= 9.50 00:32:37.609 clat percentiles (msec): 00:32:37.609 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.609 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.609 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.609 | 99.00th=[ 41], 99.50th=[ 47], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.609 | 99.99th=[ 197] 00:32:37.609 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.609 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:32:37.609 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.609 cpu : usr=97.27%, sys=1.70%, ctx=147, majf=0, minf=56 00:32:37.609 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.609 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.609 filename1: (groupid=0, jobs=1): err= 0: pid=2538146: Thu Oct 17 17:00:49 2024 00:32:37.609 read: IOPS=459, BW=1838KiB/s (1882kB/s)(18.2MiB/10169msec) 00:32:37.609 slat (usec): min=9, max=127, avg=55.72, stdev=19.33 00:32:37.610 clat (msec): min=27, max=195, avg=34.33, stdev= 9.50 00:32:37.610 lat (msec): min=27, max=195, avg=34.39, stdev= 9.50 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 
99.00th=[ 42], 99.50th=[ 47], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.610 | 99.99th=[ 197] 00:32:37.610 bw ( KiB/s): min= 1664, max= 1920, per=4.21%, avg=1859.25, stdev=81.60, samples=20 00:32:37.610 iops : min= 416, max= 480, avg=464.80, stdev=20.42, samples=20 00:32:37.610 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.610 cpu : usr=97.39%, sys=1.68%, ctx=94, majf=0, minf=46 00:32:37.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.610 filename1: (groupid=0, jobs=1): err= 0: pid=2538147: Thu Oct 17 17:00:49 2024 00:32:37.610 read: IOPS=480, BW=1920KiB/s (1966kB/s)(19.1MiB/10184msec) 00:32:37.610 slat (usec): min=8, max=162, avg=34.60, stdev=18.29 00:32:37.610 clat (msec): min=9, max=188, avg=33.09, stdev= 9.80 00:32:37.610 lat (msec): min=9, max=188, avg=33.12, stdev= 9.80 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 19], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 99.00th=[ 41], 99.50th=[ 45], 99.90th=[ 188], 99.95th=[ 188], 00:32:37.610 | 99.99th=[ 188] 00:32:37.610 bw ( KiB/s): min= 1792, max= 2400, per=4.41%, avg=1949.20, stdev=185.45, samples=20 00:32:37.610 iops : min= 448, max= 600, avg=487.30, stdev=46.36, samples=20 00:32:37.610 lat (msec) : 10=0.14%, 20=3.42%, 50=96.11%, 250=0.33% 00:32:37.610 cpu : usr=98.41%, sys=1.13%, ctx=25, majf=0, minf=59 00:32:37.610 IO depths : 1=5.1%, 2=10.1%, 4=21.2%, 8=55.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 
complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 issued rwts: total=4889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.610 filename1: (groupid=0, jobs=1): err= 0: pid=2538148: Thu Oct 17 17:00:49 2024 00:32:37.610 read: IOPS=458, BW=1835KiB/s (1879kB/s)(18.2MiB/10151msec) 00:32:37.610 slat (usec): min=8, max=154, avg=54.81, stdev=25.00 00:32:37.610 clat (msec): min=31, max=195, avg=34.35, stdev= 9.61 00:32:37.610 lat (msec): min=31, max=195, avg=34.40, stdev= 9.61 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 99.00th=[ 46], 99.50th=[ 58], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.610 | 99.99th=[ 197] 00:32:37.610 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1856.00, stdev=77.69, samples=20 00:32:37.610 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:32:37.610 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.610 cpu : usr=98.39%, sys=1.18%, ctx=24, majf=0, minf=30 00:32:37.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.610 filename1: (groupid=0, jobs=1): err= 0: pid=2538149: Thu Oct 17 17:00:49 2024 00:32:37.610 read: IOPS=460, BW=1843KiB/s (1887kB/s)(18.3MiB/10177msec) 00:32:37.610 slat (usec): min=8, max=107, avg=27.12, stdev=17.14 00:32:37.610 clat (msec): min=20, max=194, avg=34.53, stdev= 9.46 00:32:37.610 lat (msec): min=20, max=194, avg=34.56, stdev= 
9.46 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 99.00th=[ 41], 99.50th=[ 47], 99.90th=[ 194], 99.95th=[ 194], 00:32:37.610 | 99.99th=[ 194] 00:32:37.610 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.610 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:32:37.610 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.610 cpu : usr=98.32%, sys=1.21%, ctx=24, majf=0, minf=67 00:32:37.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.610 filename2: (groupid=0, jobs=1): err= 0: pid=2538150: Thu Oct 17 17:00:49 2024 00:32:37.610 read: IOPS=460, BW=1842KiB/s (1887kB/s)(18.3MiB/10178msec) 00:32:37.610 slat (usec): min=10, max=110, avg=41.24, stdev=17.18 00:32:37.610 clat (msec): min=18, max=194, avg=34.41, stdev= 9.48 00:32:37.610 lat (msec): min=18, max=195, avg=34.46, stdev= 9.48 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 99.00th=[ 41], 99.50th=[ 47], 99.90th=[ 194], 99.95th=[ 194], 00:32:37.610 | 99.99th=[ 194] 00:32:37.610 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.610 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:32:37.610 lat (msec) : 20=0.04%, 
50=99.62%, 250=0.34% 00:32:37.610 cpu : usr=97.08%, sys=1.85%, ctx=350, majf=0, minf=40 00:32:37.610 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.610 filename2: (groupid=0, jobs=1): err= 0: pid=2538151: Thu Oct 17 17:00:49 2024 00:32:37.610 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.3MiB/10165msec) 00:32:37.610 slat (nsec): min=8491, max=59886, avg=29538.65, stdev=10491.73 00:32:37.610 clat (msec): min=21, max=188, avg=34.43, stdev= 9.11 00:32:37.610 lat (msec): min=21, max=188, avg=34.46, stdev= 9.11 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 99.00th=[ 41], 99.50th=[ 45], 99.90th=[ 188], 99.95th=[ 188], 00:32:37.610 | 99.99th=[ 188] 00:32:37.610 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.610 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:32:37.610 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.610 cpu : usr=97.13%, sys=1.96%, ctx=77, majf=0, minf=66 00:32:37.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.610 filename2: (groupid=0, jobs=1): err= 0: pid=2538152: Thu Oct 17 
17:00:49 2024 00:32:37.610 read: IOPS=458, BW=1834KiB/s (1878kB/s)(18.2MiB/10153msec) 00:32:37.610 slat (usec): min=8, max=148, avg=48.38, stdev=21.95 00:32:37.610 clat (msec): min=32, max=195, avg=34.40, stdev= 9.62 00:32:37.610 lat (msec): min=32, max=195, avg=34.45, stdev= 9.61 00:32:37.610 clat percentiles (msec): 00:32:37.610 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.610 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.610 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.610 | 99.00th=[ 46], 99.50th=[ 59], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.610 | 99.99th=[ 197] 00:32:37.610 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1855.60, stdev=78.06, samples=20 00:32:37.610 iops : min= 416, max= 480, avg=463.90, stdev=19.51, samples=20 00:32:37.610 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.610 cpu : usr=98.40%, sys=1.17%, ctx=14, majf=0, minf=48 00:32:37.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.611 filename2: (groupid=0, jobs=1): err= 0: pid=2538153: Thu Oct 17 17:00:49 2024 00:32:37.611 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.3MiB/10181msec) 00:32:37.611 slat (usec): min=14, max=106, avg=47.88, stdev=15.94 00:32:37.611 clat (msec): min=20, max=197, avg=34.32, stdev= 9.51 00:32:37.611 lat (msec): min=20, max=197, avg=34.37, stdev= 9.51 00:32:37.611 clat percentiles (msec): 00:32:37.611 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.611 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.611 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.611 | 99.00th=[ 
41], 99.50th=[ 47], 99.90th=[ 197], 99.95th=[ 197], 00:32:37.611 | 99.99th=[ 199] 00:32:37.611 bw ( KiB/s): min= 1792, max= 1920, per=4.23%, avg=1868.80, stdev=64.34, samples=20 00:32:37.611 iops : min= 448, max= 480, avg=467.20, stdev=16.08, samples=20 00:32:37.611 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.611 cpu : usr=98.33%, sys=1.25%, ctx=21, majf=0, minf=44 00:32:37.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.611 filename2: (groupid=0, jobs=1): err= 0: pid=2538154: Thu Oct 17 17:00:49 2024 00:32:37.611 read: IOPS=458, BW=1836KiB/s (1880kB/s)(18.2MiB/10146msec) 00:32:37.611 slat (usec): min=8, max=100, avg=33.79, stdev=13.86 00:32:37.611 clat (msec): min=32, max=189, avg=34.56, stdev= 9.22 00:32:37.611 lat (msec): min=32, max=189, avg=34.59, stdev= 9.22 00:32:37.611 clat percentiles (msec): 00:32:37.611 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:32:37.611 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.611 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.611 | 99.00th=[ 44], 99.50th=[ 58], 99.90th=[ 190], 99.95th=[ 190], 00:32:37.611 | 99.99th=[ 190] 00:32:37.611 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1854.55, stdev=79.21, samples=20 00:32:37.611 iops : min= 416, max= 480, avg=463.60, stdev=19.85, samples=20 00:32:37.611 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.611 cpu : usr=98.29%, sys=1.30%, ctx=16, majf=0, minf=46 00:32:37.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.611 filename2: (groupid=0, jobs=1): err= 0: pid=2538155: Thu Oct 17 17:00:49 2024 00:32:37.611 read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.3MiB/10194msec) 00:32:37.611 slat (usec): min=4, max=114, avg=50.94, stdev=32.23 00:32:37.611 clat (msec): min=21, max=199, avg=34.30, stdev= 9.18 00:32:37.611 lat (msec): min=21, max=199, avg=34.35, stdev= 9.18 00:32:37.611 clat percentiles (msec): 00:32:37.611 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:37.611 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.611 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.611 | 99.00th=[ 42], 99.50th=[ 45], 99.90th=[ 188], 99.95th=[ 188], 00:32:37.611 | 99.99th=[ 201] 00:32:37.611 bw ( KiB/s): min= 1788, max= 1920, per=4.23%, avg=1868.60, stdev=64.59, samples=20 00:32:37.611 iops : min= 447, max= 480, avg=467.15, stdev=16.15, samples=20 00:32:37.611 lat (msec) : 50=99.66%, 250=0.34% 00:32:37.611 cpu : usr=98.37%, sys=1.18%, ctx=20, majf=0, minf=79 00:32:37.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.611 filename2: (groupid=0, jobs=1): err= 0: pid=2538156: Thu Oct 17 17:00:49 2024 00:32:37.611 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10007msec) 00:32:37.611 slat (nsec): min=3992, max=55164, avg=25076.83, stdev=9881.53 00:32:37.611 clat (usec): min=22159, max=58042, avg=34037.02, stdev=1890.46 00:32:37.611 lat (usec): min=22170, max=58070, avg=34062.10, 
stdev=1890.94 00:32:37.611 clat percentiles (usec): 00:32:37.611 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:32:37.611 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:32:37.611 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:32:37.611 | 99.00th=[40633], 99.50th=[43779], 99.90th=[57934], 99.95th=[57934], 00:32:37.611 | 99.99th=[57934] 00:32:37.611 bw ( KiB/s): min= 1792, max= 1920, per=4.22%, avg=1866.11, stdev=64.93, samples=19 00:32:37.611 iops : min= 448, max= 480, avg=466.53, stdev=16.23, samples=19 00:32:37.611 lat (msec) : 50=99.66%, 100=0.34% 00:32:37.611 cpu : usr=97.77%, sys=1.63%, ctx=57, majf=0, minf=43 00:32:37.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:37.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.611 filename2: (groupid=0, jobs=1): err= 0: pid=2538157: Thu Oct 17 17:00:49 2024 00:32:37.611 read: IOPS=458, BW=1834KiB/s (1878kB/s)(18.2MiB/10153msec) 00:32:37.611 slat (usec): min=10, max=137, avg=53.88, stdev=22.21 00:32:37.611 clat (msec): min=30, max=196, avg=34.40, stdev= 9.57 00:32:37.611 lat (msec): min=30, max=196, avg=34.45, stdev= 9.57 00:32:37.611 clat percentiles (msec): 00:32:37.611 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:32:37.611 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:32:37.611 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:32:37.611 | 99.00th=[ 47], 99.50th=[ 57], 99.90th=[ 194], 99.95th=[ 194], 00:32:37.611 | 99.99th=[ 197] 00:32:37.611 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1856.00, stdev=77.69, samples=20 00:32:37.611 iops : min= 416, max= 480, avg=464.00, stdev=19.42, 
samples=20 00:32:37.611 lat (msec) : 50=99.31%, 100=0.34%, 250=0.34% 00:32:37.611 cpu : usr=97.30%, sys=1.81%, ctx=153, majf=0, minf=61 00:32:37.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:37.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.611 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:37.611 00:32:37.611 Run status group 0 (all jobs): 00:32:37.611 READ: bw=43.1MiB/s (45.2MB/s), 1831KiB/s-1941KiB/s (1875kB/s-1987kB/s), io=440MiB (461MB), run=10002-10194msec 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.611 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 bdev_null0 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 [2024-10-17 17:00:50.248910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 bdev_null1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:37.612 { 00:32:37.612 "params": { 00:32:37.612 "name": "Nvme$subsystem", 00:32:37.612 "trtype": "$TEST_TRANSPORT", 00:32:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.612 "adrfam": "ipv4", 00:32:37.612 "trsvcid": "$NVMF_PORT", 00:32:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.612 "hdgst": ${hdgst:-false}, 00:32:37.612 "ddgst": ${ddgst:-false} 00:32:37.612 }, 00:32:37.612 "method": "bdev_nvme_attach_controller" 00:32:37.612 } 00:32:37.612 EOF 00:32:37.612 )") 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:37.612 { 00:32:37.612 "params": { 00:32:37.612 "name": "Nvme$subsystem", 00:32:37.612 "trtype": "$TEST_TRANSPORT", 00:32:37.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.612 "adrfam": "ipv4", 00:32:37.612 "trsvcid": "$NVMF_PORT", 00:32:37.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.612 "hdgst": ${hdgst:-false}, 00:32:37.612 "ddgst": ${ddgst:-false} 00:32:37.612 }, 00:32:37.612 "method": "bdev_nvme_attach_controller" 00:32:37.612 } 00:32:37.612 EOF 00:32:37.612 )") 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # jq . 00:32:37.612 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:37.613 "params": { 00:32:37.613 "name": "Nvme0", 00:32:37.613 "trtype": "tcp", 00:32:37.613 "traddr": "10.0.0.2", 00:32:37.613 "adrfam": "ipv4", 00:32:37.613 "trsvcid": "4420", 00:32:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.613 "hdgst": false, 00:32:37.613 "ddgst": false 00:32:37.613 }, 00:32:37.613 "method": "bdev_nvme_attach_controller" 00:32:37.613 },{ 00:32:37.613 "params": { 00:32:37.613 "name": "Nvme1", 00:32:37.613 "trtype": "tcp", 00:32:37.613 "traddr": "10.0.0.2", 00:32:37.613 "adrfam": "ipv4", 00:32:37.613 "trsvcid": "4420", 00:32:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:37.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:37.613 "hdgst": false, 00:32:37.613 "ddgst": false 00:32:37.613 }, 00:32:37.613 "method": "bdev_nvme_attach_controller" 00:32:37.613 }' 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:37.613 
17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:37.613 17:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.613 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:37.613 ... 00:32:37.613 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:37.613 ... 00:32:37.613 fio-3.35 00:32:37.613 Starting 4 threads 00:32:42.880 00:32:42.880 filename0: (groupid=0, jobs=1): err= 0: pid=2539443: Thu Oct 17 17:00:56 2024 00:32:42.880 read: IOPS=1803, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5002msec) 00:32:42.880 slat (nsec): min=5581, max=78921, avg=15838.41, stdev=10035.70 00:32:42.880 clat (usec): min=831, max=8151, avg=4381.89, stdev=524.02 00:32:42.880 lat (usec): min=849, max=8180, avg=4397.73, stdev=524.72 00:32:42.880 clat percentiles (usec): 00:32:42.880 | 1.00th=[ 2737], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4080], 00:32:42.880 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4490], 00:32:42.880 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5014], 00:32:42.880 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7635], 00:32:42.880 | 99.99th=[ 8160] 00:32:42.880 bw ( KiB/s): min=14080, max=15054, per=25.66%, avg=14425.40, stdev=298.03, samples=10 00:32:42.880 iops : min= 1760, max= 1881, avg=1803.10, stdev=37.08, samples=10 00:32:42.880 lat (usec) : 1000=0.04% 00:32:42.880 lat (msec) : 2=0.33%, 4=15.86%, 10=83.76% 00:32:42.880 cpu : usr=95.14%, sys=4.38%, ctx=8, majf=0, minf=0 00:32:42.880 IO depths : 1=0.6%, 2=13.7%, 4=58.4%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.880 
complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.880 issued rwts: total=9022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:42.880 filename0: (groupid=0, jobs=1): err= 0: pid=2539444: Thu Oct 17 17:00:56 2024 00:32:42.880 read: IOPS=1770, BW=13.8MiB/s (14.5MB/s)(69.2MiB/5002msec) 00:32:42.880 slat (nsec): min=5259, max=67307, avg=22035.41, stdev=9852.68 00:32:42.880 clat (usec): min=869, max=8123, avg=4439.16, stdev=623.74 00:32:42.881 lat (usec): min=890, max=8153, avg=4461.20, stdev=624.26 00:32:42.881 clat percentiles (usec): 00:32:42.881 | 1.00th=[ 2474], 5.00th=[ 3556], 10.00th=[ 3851], 20.00th=[ 4146], 00:32:42.881 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4490], 00:32:42.881 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5342], 00:32:42.881 | 99.00th=[ 6783], 99.50th=[ 7373], 99.90th=[ 7963], 99.95th=[ 7963], 00:32:42.881 | 99.99th=[ 8094] 00:32:42.881 bw ( KiB/s): min=13936, max=14848, per=25.18%, avg=14156.40, stdev=275.87, samples=10 00:32:42.881 iops : min= 1742, max= 1856, avg=1769.50, stdev=34.48, samples=10 00:32:42.881 lat (usec) : 1000=0.02% 00:32:42.881 lat (msec) : 2=0.47%, 4=13.56%, 10=85.94% 00:32:42.881 cpu : usr=96.32%, sys=3.16%, ctx=9, majf=0, minf=9 00:32:42.881 IO depths : 1=0.4%, 2=17.7%, 4=55.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.881 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.881 issued rwts: total=8854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:42.881 filename1: (groupid=0, jobs=1): err= 0: pid=2539445: Thu Oct 17 17:00:56 2024 00:32:42.881 read: IOPS=1711, BW=13.4MiB/s (14.0MB/s)(66.9MiB/5001msec) 00:32:42.881 slat (nsec): min=5575, max=79715, avg=21947.41, stdev=12708.26 00:32:42.881 clat (usec): 
min=697, max=8558, avg=4594.40, stdev=723.98 00:32:42.881 lat (usec): min=711, max=8579, avg=4616.35, stdev=722.83 00:32:42.881 clat percentiles (usec): 00:32:42.881 | 1.00th=[ 2704], 5.00th=[ 3785], 10.00th=[ 4047], 20.00th=[ 4293], 00:32:42.881 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:42.881 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5342], 95.00th=[ 5997], 00:32:42.881 | 99.00th=[ 7439], 99.50th=[ 7767], 99.90th=[ 8160], 99.95th=[ 8225], 00:32:42.881 | 99.99th=[ 8586] 00:32:42.881 bw ( KiB/s): min=12960, max=14496, per=24.33%, avg=13680.00, stdev=410.50, samples=9 00:32:42.881 iops : min= 1620, max= 1812, avg=1710.00, stdev=51.31, samples=9 00:32:42.881 lat (usec) : 750=0.01%, 1000=0.05% 00:32:42.881 lat (msec) : 2=0.55%, 4=8.14%, 10=91.25% 00:32:42.881 cpu : usr=95.74%, sys=3.78%, ctx=14, majf=0, minf=10 00:32:42.881 IO depths : 1=0.3%, 2=14.9%, 4=57.7%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.881 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.881 issued rwts: total=8558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:42.881 filename1: (groupid=0, jobs=1): err= 0: pid=2539446: Thu Oct 17 17:00:56 2024 00:32:42.881 read: IOPS=1744, BW=13.6MiB/s (14.3MB/s)(68.2MiB/5003msec) 00:32:42.881 slat (nsec): min=5287, max=79029, avg=20428.88, stdev=12857.62 00:32:42.881 clat (usec): min=741, max=8410, avg=4515.63, stdev=645.38 00:32:42.881 lat (usec): min=756, max=8454, avg=4536.06, stdev=645.16 00:32:42.881 clat percentiles (usec): 00:32:42.881 | 1.00th=[ 2671], 5.00th=[ 3720], 10.00th=[ 3982], 20.00th=[ 4178], 00:32:42.881 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:42.881 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5604], 00:32:42.881 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 7963], 
99.95th=[ 8029], 00:32:42.881 | 99.99th=[ 8455] 00:32:42.881 bw ( KiB/s): min=13584, max=14448, per=24.81%, avg=13948.80, stdev=276.80, samples=10 00:32:42.881 iops : min= 1698, max= 1806, avg=1743.60, stdev=34.60, samples=10 00:32:42.881 lat (usec) : 750=0.02%, 1000=0.03% 00:32:42.881 lat (msec) : 2=0.31%, 4=10.51%, 10=89.12% 00:32:42.881 cpu : usr=95.22%, sys=4.32%, ctx=10, majf=0, minf=0 00:32:42.881 IO depths : 1=0.4%, 2=14.2%, 4=58.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.881 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.881 issued rwts: total=8726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:42.881 00:32:42.881 Run status group 0 (all jobs): 00:32:42.881 READ: bw=54.9MiB/s (57.6MB/s), 13.4MiB/s-14.1MiB/s (14.0MB/s-14.8MB/s), io=275MiB (288MB), run=5001-5003msec 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:42.881 
17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.881 00:32:42.881 real 0m24.278s 00:32:42.881 user 4m35.970s 00:32:42.881 sys 0m6.277s 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:42.881 17:00:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.881 ************************************ 00:32:42.881 END TEST fio_dif_rand_params 00:32:42.881 ************************************ 00:32:43.140 17:00:56 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:43.140 17:00:56 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:43.140 17:00:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:43.140 17:00:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:43.140 ************************************ 00:32:43.140 START TEST fio_dif_digest 00:32:43.140 ************************************ 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:43.140 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.141 bdev_null0 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.141 [2024-10-17 17:00:56.636425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@558 -- # config=() 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:43.141 { 00:32:43.141 "params": { 00:32:43.141 "name": "Nvme$subsystem", 00:32:43.141 "trtype": "$TEST_TRANSPORT", 00:32:43.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.141 "adrfam": "ipv4", 00:32:43.141 "trsvcid": "$NVMF_PORT", 00:32:43.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.141 "hdgst": ${hdgst:-false}, 00:32:43.141 "ddgst": ${ddgst:-false} 00:32:43.141 }, 00:32:43.141 "method": "bdev_nvme_attach_controller" 00:32:43.141 } 00:32:43.141 EOF 00:32:43.141 )") 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:43.141 "params": { 00:32:43.141 "name": "Nvme0", 00:32:43.141 "trtype": "tcp", 00:32:43.141 "traddr": "10.0.0.2", 00:32:43.141 "adrfam": "ipv4", 00:32:43.141 "trsvcid": "4420", 00:32:43.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:43.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:43.141 "hdgst": true, 00:32:43.141 "ddgst": true 00:32:43.141 }, 00:32:43.141 "method": "bdev_nvme_attach_controller" 00:32:43.141 }' 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:43.141 17:00:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.400 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:43.400 ... 
00:32:43.400 fio-3.35 00:32:43.400 Starting 3 threads 00:32:55.604 00:32:55.604 filename0: (groupid=0, jobs=1): err= 0: pid=2540289: Thu Oct 17 17:01:07 2024 00:32:55.604 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10047msec) 00:32:55.604 slat (usec): min=4, max=125, avg=15.12, stdev= 4.22 00:32:55.604 clat (usec): min=11534, max=52166, avg=15076.50, stdev=1521.09 00:32:55.604 lat (usec): min=11549, max=52181, avg=15091.62, stdev=1521.25 00:32:55.604 clat percentiles (usec): 00:32:55.604 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13698], 20.00th=[14222], 00:32:55.604 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:32:55.604 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:32:55.604 | 99.00th=[17695], 99.50th=[17957], 99.90th=[47973], 99.95th=[52167], 00:32:55.604 | 99.99th=[52167] 00:32:55.604 bw ( KiB/s): min=24576, max=26880, per=34.51%, avg=25487.40, stdev=648.66, samples=20 00:32:55.604 iops : min= 192, max= 210, avg=199.10, stdev= 5.05, samples=20 00:32:55.604 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:32:55.604 cpu : usr=92.16%, sys=7.36%, ctx=22, majf=0, minf=184 00:32:55.604 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.604 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.604 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:55.604 filename0: (groupid=0, jobs=1): err= 0: pid=2540290: Thu Oct 17 17:01:07 2024 00:32:55.604 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(246MiB/10049msec) 00:32:55.604 slat (usec): min=4, max=113, avg=14.88, stdev= 4.13 00:32:55.604 clat (usec): min=11390, max=54941, avg=15298.95, stdev=1676.01 00:32:55.604 lat (usec): min=11404, max=54955, avg=15313.83, stdev=1676.11 00:32:55.604 clat percentiles (usec): 00:32:55.604 | 1.00th=[12780], 
5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:32:55.604 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:32:55.604 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:32:55.604 | 99.00th=[18220], 99.50th=[18744], 99.90th=[51119], 99.95th=[54789], 00:32:55.604 | 99.99th=[54789] 00:32:55.604 bw ( KiB/s): min=23808, max=27136, per=34.02%, avg=25126.40, stdev=868.24, samples=20 00:32:55.604 iops : min= 186, max= 212, avg=196.30, stdev= 6.78, samples=20 00:32:55.604 lat (msec) : 20=99.90%, 100=0.10% 00:32:55.604 cpu : usr=92.55%, sys=6.97%, ctx=33, majf=0, minf=88 00:32:55.604 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.604 issued rwts: total=1965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.604 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:55.604 filename0: (groupid=0, jobs=1): err= 0: pid=2540291: Thu Oct 17 17:01:07 2024 00:32:55.604 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10047msec) 00:32:55.604 slat (nsec): min=4230, max=38115, avg=14763.25, stdev=3176.01 00:32:55.604 clat (usec): min=12254, max=52860, avg=16348.47, stdev=1530.14 00:32:55.604 lat (usec): min=12268, max=52874, avg=16363.24, stdev=1530.16 00:32:55.604 clat percentiles (usec): 00:32:55.604 | 1.00th=[13960], 5.00th=[14615], 10.00th=[15008], 20.00th=[15533], 00:32:55.604 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:32:55.604 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:55.604 | 99.00th=[19268], 99.50th=[19268], 99.90th=[46924], 99.95th=[52691], 00:32:55.604 | 99.99th=[52691] 00:32:55.604 bw ( KiB/s): min=22784, max=24832, per=31.82%, avg=23503.20, stdev=617.18, samples=20 00:32:55.604 iops : min= 178, max= 194, avg=183.60, stdev= 4.79, samples=20 00:32:55.604 lat 
(msec) : 20=99.89%, 50=0.05%, 100=0.05% 00:32:55.604 cpu : usr=92.88%, sys=6.65%, ctx=22, majf=0, minf=131 00:32:55.604 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.604 issued rwts: total=1839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.604 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:55.604 00:32:55.604 Run status group 0 (all jobs): 00:32:55.604 READ: bw=72.1MiB/s (75.6MB/s), 22.9MiB/s-24.8MiB/s (24.0MB/s-26.0MB/s), io=725MiB (760MB), run=10047-10049msec 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.604 00:32:55.604 real 0m11.187s 00:32:55.604 user 
0m28.967s 00:32:55.604 sys 0m2.381s 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:55.604 17:01:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:55.604 ************************************ 00:32:55.604 END TEST fio_dif_digest 00:32:55.604 ************************************ 00:32:55.604 17:01:07 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:55.604 17:01:07 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.604 rmmod nvme_tcp 00:32:55.604 rmmod nvme_fabrics 00:32:55.604 rmmod nvme_keyring 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2533959 ']' 00:32:55.604 17:01:07 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2533959 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2533959 ']' 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2533959 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2533959 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:55.604 17:01:07 nvmf_dif -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2533959' 00:32:55.604 killing process with pid 2533959 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2533959 00:32:55.604 17:01:07 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2533959 00:32:55.604 17:01:08 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:32:55.604 17:01:08 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:55.864 Waiting for block devices as requested 00:32:55.864 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:55.864 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:56.123 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:56.123 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:56.123 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:56.123 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:56.383 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:56.383 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:56.383 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:56.641 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:56.641 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:56.641 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:56.641 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:56.900 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:56.900 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:56.900 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:56.900 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.159 17:01:10 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.159 17:01:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:57.159 17:01:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.060 17:01:12 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.060 00:32:59.060 real 1m7.370s 00:32:59.060 user 6m34.334s 00:32:59.060 sys 0m17.379s 00:32:59.060 17:01:12 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:59.060 17:01:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:59.060 ************************************ 00:32:59.060 END TEST nvmf_dif 00:32:59.060 ************************************ 00:32:59.060 17:01:12 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:59.060 17:01:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:59.060 17:01:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:59.060 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:32:59.060 ************************************ 00:32:59.060 START TEST nvmf_abort_qd_sizes 00:32:59.060 ************************************ 00:32:59.060 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:59.319 * Looking for test storage... 
00:32:59.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:59.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.319 --rc genhtml_branch_coverage=1 00:32:59.319 --rc genhtml_function_coverage=1 00:32:59.319 --rc genhtml_legend=1 00:32:59.319 --rc geninfo_all_blocks=1 00:32:59.319 --rc geninfo_unexecuted_blocks=1 00:32:59.319 00:32:59.319 ' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:59.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.319 --rc genhtml_branch_coverage=1 00:32:59.319 --rc genhtml_function_coverage=1 00:32:59.319 --rc genhtml_legend=1 00:32:59.319 --rc 
geninfo_all_blocks=1 00:32:59.319 --rc geninfo_unexecuted_blocks=1 00:32:59.319 00:32:59.319 ' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:59.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.319 --rc genhtml_branch_coverage=1 00:32:59.319 --rc genhtml_function_coverage=1 00:32:59.319 --rc genhtml_legend=1 00:32:59.319 --rc geninfo_all_blocks=1 00:32:59.319 --rc geninfo_unexecuted_blocks=1 00:32:59.319 00:32:59.319 ' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:59.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.319 --rc genhtml_branch_coverage=1 00:32:59.319 --rc genhtml_function_coverage=1 00:32:59.319 --rc genhtml_legend=1 00:32:59.319 --rc geninfo_all_blocks=1 00:32:59.319 --rc geninfo_unexecuted_blocks=1 00:32:59.319 00:32:59.319 ' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.319 17:01:12 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.319 17:01:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:59.319 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.320 17:01:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.220 17:01:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:01.220 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:01.220 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:01.220 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:01.221 Found net devices under 0000:09:00.0: cvl_0_0 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up 
== up ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:01.221 Found net devices under 0000:09:00.1: cvl_0_1 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:01.221 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:01.479 17:01:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:01.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:33:01.479 00:33:01.479 --- 10.0.0.2 ping statistics --- 00:33:01.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.479 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:33:01.479 00:33:01.479 --- 10.0.0.1 ping statistics --- 00:33:01.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.479 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:33:01.479 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:02.853 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:02.853 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:02.853 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:03.791 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:03.791 17:01:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2545213 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2545213 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2545213 ']' 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.791 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:04.049 [2024-10-17 17:01:17.487079] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:33:04.049 [2024-10-17 17:01:17.487155] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.049 [2024-10-17 17:01:17.556560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:04.049 [2024-10-17 17:01:17.622908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.049 [2024-10-17 17:01:17.622974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.049 [2024-10-17 17:01:17.622990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.049 [2024-10-17 17:01:17.623015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.049 [2024-10-17 17:01:17.623029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:04.049 [2024-10-17 17:01:17.624697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.049 [2024-10-17 17:01:17.624751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:04.049 [2024-10-17 17:01:17.624864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:04.049 [2024-10-17 17:01:17.624867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:04.308 17:01:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:04.308 ************************************ 00:33:04.308 START TEST spdk_target_abort 00:33:04.308 ************************************ 00:33:04.308 17:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:33:04.308 17:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:04.308 17:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:33:04.308 17:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.308 17:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.590 spdk_targetn1 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.590 [2024-10-17 17:01:20.652954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.590 [2024-10-17 17:01:20.696290] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:07.590 17:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.883 Initializing NVMe Controllers 00:33:10.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:10.883 Initialization complete. Launching workers. 
00:33:10.883 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12920, failed: 0 00:33:10.883 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1189, failed to submit 11731 00:33:10.883 success 773, unsuccessful 416, failed 0 00:33:10.883 17:01:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:10.883 17:01:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:14.224 Initializing NVMe Controllers 00:33:14.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:14.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:14.224 Initialization complete. Launching workers. 00:33:14.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8919, failed: 0 00:33:14.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7705 00:33:14.224 success 349, unsuccessful 865, failed 0 00:33:14.224 17:01:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:14.224 17:01:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:16.755 Initializing NVMe Controllers 00:33:16.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:16.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:16.755 Initialization complete. Launching workers. 
00:33:16.755 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31340, failed: 0 00:33:16.755 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2510, failed to submit 28830 00:33:16.756 success 552, unsuccessful 1958, failed 0 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.756 17:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2545213 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2545213 ']' 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2545213 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2545213 00:33:18.132 17:01:31 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2545213' 00:33:18.132 killing process with pid 2545213 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2545213 00:33:18.132 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2545213 00:33:18.391 00:33:18.391 real 0m14.073s 00:33:18.391 user 0m53.344s 00:33:18.391 sys 0m2.614s 00:33:18.391 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:18.391 17:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:18.391 ************************************ 00:33:18.391 END TEST spdk_target_abort 00:33:18.391 ************************************ 00:33:18.391 17:01:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:18.391 17:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:18.391 17:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:18.391 17:01:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:18.391 ************************************ 00:33:18.391 START TEST kernel_target_abort 00:33:18.391 ************************************ 00:33:18.391 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:33:18.391 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:18.391 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:33:18.391 17:01:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@665 -- # local block nvme 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:18.392 17:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:19.768 Waiting for block devices as requested 00:33:19.768 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:19.768 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:19.768 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:19.768 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:20.027 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:20.027 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:20.027 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:20.027 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:20.286 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:20.286 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:20.286 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:20.545 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:20.545 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:20.545 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:20.545 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:20.545 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:20.805 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:20.805 17:01:34 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:20.805 No valid GPT data, bailing 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@693 -- # echo 1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:20.805 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:33:21.065 00:33:21.065 Discovery Log Number of Records 2, Generation counter 2 00:33:21.065 =====Discovery Log Entry 0====== 00:33:21.065 trtype: tcp 00:33:21.065 adrfam: ipv4 00:33:21.065 subtype: current discovery subsystem 00:33:21.065 treq: not specified, sq flow control disable supported 00:33:21.065 portid: 1 00:33:21.065 trsvcid: 4420 00:33:21.065 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:21.065 traddr: 10.0.0.1 00:33:21.065 eflags: none 00:33:21.065 sectype: none 00:33:21.065 =====Discovery Log Entry 1====== 00:33:21.065 trtype: tcp 00:33:21.065 adrfam: ipv4 00:33:21.065 subtype: nvme subsystem 00:33:21.065 treq: not specified, sq flow control disable supported 00:33:21.065 portid: 1 00:33:21.065 trsvcid: 4420 00:33:21.065 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:21.065 traddr: 10.0.0.1 00:33:21.065 eflags: none 00:33:21.065 sectype: none 00:33:21.065 17:01:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:21.065 17:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:24.354 Initializing NVMe Controllers 00:33:24.354 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:24.354 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:24.354 Initialization complete. Launching workers. 
00:33:24.354 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41573, failed: 0 00:33:24.354 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41573, failed to submit 0 00:33:24.354 success 0, unsuccessful 41573, failed 0 00:33:24.354 17:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:24.354 17:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:27.644 Initializing NVMe Controllers 00:33:27.644 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:27.644 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:27.644 Initialization complete. Launching workers. 00:33:27.644 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79463, failed: 0 00:33:27.644 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18602, failed to submit 60861 00:33:27.644 success 0, unsuccessful 18602, failed 0 00:33:27.644 17:01:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:27.644 17:01:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:30.932 Initializing NVMe Controllers 00:33:30.932 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:30.932 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:30.932 Initialization complete. Launching workers. 
00:33:30.932 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73183, failed: 0 00:33:30.932 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18270, failed to submit 54913 00:33:30.932 success 0, unsuccessful 18270, failed 0 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:33:30.932 17:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.500 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:31.500 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:31.500 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:31.500 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:31.500 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:31.500 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:31.500 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:31.500 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:31.500 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:31.500 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:31.500 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:31.500 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:31.758 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:31.758 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:31.758 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:31.758 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:32.693 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:32.693 00:33:32.693 real 0m14.361s 00:33:32.693 user 0m6.171s 00:33:32.693 sys 0m3.466s 00:33:32.693 17:01:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:32.693 17:01:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:32.693 ************************************ 00:33:32.693 END TEST kernel_target_abort 00:33:32.693 ************************************ 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:32.693 rmmod nvme_tcp 00:33:32.693 rmmod nvme_fabrics 00:33:32.693 rmmod nvme_keyring 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2545213 ']' 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2545213 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2545213 ']' 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2545213 00:33:32.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2545213) - No such process 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2545213 is not found' 00:33:32.693 Process with pid 2545213 is not found 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:33:32.693 17:01:46 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:34.066 Waiting for block devices as requested 00:33:34.066 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:34.066 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:34.066 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:34.066 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:34.066 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:34.325 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:34.325 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:34.325 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:34.325 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:34.583 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:34.583 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:34.583 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:34.842 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:34.842 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:34.842 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:35.101 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:35.101 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:35.101 17:01:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.635 17:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.635 00:33:37.635 real 0m38.046s 00:33:37.635 user 1m1.745s 00:33:37.635 sys 0m9.525s 00:33:37.635 17:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:37.635 17:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:37.635 ************************************ 00:33:37.635 END TEST nvmf_abort_qd_sizes 00:33:37.635 ************************************ 00:33:37.635 17:01:50 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:37.635 17:01:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:37.635 17:01:50 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:33:37.635 17:01:50 -- common/autotest_common.sh@10 -- # set +x 00:33:37.635 ************************************ 00:33:37.635 START TEST keyring_file 00:33:37.635 ************************************ 00:33:37.635 17:01:50 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:37.635 * Looking for test storage... 00:33:37.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:37.635 17:01:50 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:37.635 17:01:50 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:33:37.635 17:01:50 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:37.635 17:01:50 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.635 17:01:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.636 17:01:50 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:37.636 17:01:50 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.636 17:01:50 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.636 --rc genhtml_branch_coverage=1 00:33:37.636 --rc genhtml_function_coverage=1 00:33:37.636 --rc genhtml_legend=1 00:33:37.636 --rc geninfo_all_blocks=1 00:33:37.636 --rc geninfo_unexecuted_blocks=1 00:33:37.636 00:33:37.636 ' 00:33:37.636 17:01:50 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.636 --rc genhtml_branch_coverage=1 00:33:37.636 --rc genhtml_function_coverage=1 00:33:37.636 --rc genhtml_legend=1 00:33:37.636 --rc geninfo_all_blocks=1 00:33:37.636 --rc 
geninfo_unexecuted_blocks=1 00:33:37.636 00:33:37.636 ' 00:33:37.636 17:01:50 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.636 --rc genhtml_branch_coverage=1 00:33:37.636 --rc genhtml_function_coverage=1 00:33:37.636 --rc genhtml_legend=1 00:33:37.636 --rc geninfo_all_blocks=1 00:33:37.636 --rc geninfo_unexecuted_blocks=1 00:33:37.636 00:33:37.636 ' 00:33:37.636 17:01:50 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.636 --rc genhtml_branch_coverage=1 00:33:37.636 --rc genhtml_function_coverage=1 00:33:37.636 --rc genhtml_legend=1 00:33:37.636 --rc geninfo_all_blocks=1 00:33:37.636 --rc geninfo_unexecuted_blocks=1 00:33:37.636 00:33:37.636 ' 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.636 17:01:50 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.636 17:01:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.636 17:01:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.636 17:01:50 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.636 17:01:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.636 17:01:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:37.636 17:01:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:37.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:37.636 17:01:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FjKYbwWSRx 00:33:37.636 17:01:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:33:37.636 17:01:50 keyring_file -- nvmf/common.sh@731 -- # python - 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FjKYbwWSRx 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FjKYbwWSRx 00:33:37.636 17:01:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FjKYbwWSRx 00:33:37.636 17:01:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rLFI9VJAzM 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:37.636 17:01:51 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:37.636 17:01:51 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:33:37.636 17:01:51 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:37.636 17:01:51 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:33:37.636 17:01:51 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:33:37.636 17:01:51 keyring_file -- nvmf/common.sh@731 -- # python - 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rLFI9VJAzM 00:33:37.636 17:01:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rLFI9VJAzM 00:33:37.636 17:01:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rLFI9VJAzM 
00:33:37.636 17:01:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=2550983 00:33:37.636 17:01:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:37.636 17:01:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2550983 00:33:37.636 17:01:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2550983 ']' 00:33:37.636 17:01:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.636 17:01:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:37.636 17:01:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.636 17:01:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:37.636 17:01:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:37.636 [2024-10-17 17:01:51.123336] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:33:37.636 [2024-10-17 17:01:51.123437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550983 ] 00:33:37.636 [2024-10-17 17:01:51.184300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.636 [2024-10-17 17:01:51.247172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.895 17:01:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:37.895 17:01:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:37.895 17:01:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:37.895 17:01:51 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.895 17:01:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:37.895 [2024-10-17 17:01:51.533177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.895 null0 00:33:37.895 [2024-10-17 17:01:51.565191] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:37.895 [2024-10-17 17:01:51.565731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:37.895 17:01:51 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.895 17:01:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:37.895 17:01:51 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:38.154 [2024-10-17 17:01:51.593250] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:38.154 request: 00:33:38.154 { 00:33:38.154 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:38.154 "secure_channel": false, 00:33:38.154 "listen_address": { 00:33:38.154 "trtype": "tcp", 00:33:38.154 "traddr": "127.0.0.1", 00:33:38.154 "trsvcid": "4420" 00:33:38.154 }, 00:33:38.154 "method": "nvmf_subsystem_add_listener", 00:33:38.154 "req_id": 1 00:33:38.154 } 00:33:38.154 Got JSON-RPC error response 00:33:38.154 response: 00:33:38.154 { 00:33:38.154 "code": -32602, 00:33:38.154 "message": "Invalid parameters" 00:33:38.154 } 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:38.154 17:01:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=2550993 00:33:38.154 17:01:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2550993 /var/tmp/bperf.sock 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2550993 ']' 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.154 17:01:51 
keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.154 17:01:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:38.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.154 17:01:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:38.154 [2024-10-17 17:01:51.645249] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:33:38.154 [2024-10-17 17:01:51.645343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550993 ] 00:33:38.154 [2024-10-17 17:01:51.702219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.154 [2024-10-17 17:01:51.762541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.412 17:01:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:38.412 17:01:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:38.412 17:01:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:38.412 17:01:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:38.670 17:01:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rLFI9VJAzM 00:33:38.670 17:01:52 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rLFI9VJAzM 00:33:38.928 17:01:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:38.928 17:01:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:38.928 17:01:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.928 17:01:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.928 17:01:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:39.185 17:01:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FjKYbwWSRx == \/\t\m\p\/\t\m\p\.\F\j\K\Y\b\w\W\S\R\x ]] 00:33:39.185 17:01:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:39.185 17:01:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:39.185 17:01:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.185 17:01:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.185 17:01:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:39.443 17:01:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.rLFI9VJAzM == \/\t\m\p\/\t\m\p\.\r\L\F\I\9\V\J\A\z\M ]] 00:33:39.443 17:01:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:39.443 17:01:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:39.443 17:01:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.443 17:01:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.443 17:01:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:39.443 17:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:33:39.702 17:01:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:39.702 17:01:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:39.702 17:01:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:39.702 17:01:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.702 17:01:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.702 17:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.702 17:01:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:39.960 17:01:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:39.960 17:01:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.960 17:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:40.218 [2024-10-17 17:01:53.815964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:40.218 nvme0n1 00:33:40.218 17:01:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:40.218 17:01:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.218 17:01:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.218 17:01:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.218 17:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.218 17:01:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:33:40.783 17:01:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:40.783 17:01:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:40.783 17:01:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:40.783 17:01:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.783 17:01:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.783 17:01:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.783 17:01:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.783 17:01:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:40.783 17:01:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.041 Running I/O for 1 seconds... 00:33:42.043 9318.00 IOPS, 36.40 MiB/s 00:33:42.043 Latency(us) 00:33:42.043 [2024-10-17T15:01:55.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.043 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:42.043 nvme0n1 : 1.01 9368.27 36.59 0.00 0.00 13617.01 4174.89 18641.35 00:33:42.043 [2024-10-17T15:01:55.733Z] =================================================================================================================== 00:33:42.043 [2024-10-17T15:01:55.733Z] Total : 9368.27 36.59 0.00 0.00 13617.01 4174.89 18641.35 00:33:42.043 { 00:33:42.043 "results": [ 00:33:42.043 { 00:33:42.043 "job": "nvme0n1", 00:33:42.043 "core_mask": "0x2", 00:33:42.043 "workload": "randrw", 00:33:42.043 "percentage": 50, 00:33:42.043 "status": "finished", 00:33:42.043 "queue_depth": 128, 00:33:42.043 "io_size": 4096, 00:33:42.043 "runtime": 1.008297, 00:33:42.043 "iops": 9368.271451764707, 00:33:42.043 "mibps": 36.59481035845589, 00:33:42.043 
"io_failed": 0, 00:33:42.043 "io_timeout": 0, 00:33:42.043 "avg_latency_us": 13617.009241771944, 00:33:42.043 "min_latency_us": 4174.885925925926, 00:33:42.043 "max_latency_us": 18641.35111111111 00:33:42.043 } 00:33:42.043 ], 00:33:42.043 "core_count": 1 00:33:42.043 } 00:33:42.043 17:01:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:42.043 17:01:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:42.301 17:01:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:42.301 17:01:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:42.301 17:01:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.301 17:01:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.301 17:01:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.301 17:01:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.560 17:01:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:42.560 17:01:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:42.560 17:01:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:42.560 17:01:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.560 17:01:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.560 17:01:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.560 17:01:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:42.819 17:01:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:42.819 17:01:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:42.819 17:01:56 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.819 17:01:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:43.078 [2024-10-17 17:01:56.695605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:43.078 [2024-10-17 17:01:56.695792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb53b0 (107): Transport endpoint is not connected 00:33:43.078 [2024-10-17 17:01:56.696782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb53b0 (9): Bad file descriptor 00:33:43.078 [2024-10-17 17:01:56.697781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.078 [2024-10-17 17:01:56.697801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:43.078 [2024-10-17 17:01:56.697830] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:43.078 [2024-10-17 17:01:56.697846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.078 request: 00:33:43.078 { 00:33:43.078 "name": "nvme0", 00:33:43.078 "trtype": "tcp", 00:33:43.078 "traddr": "127.0.0.1", 00:33:43.078 "adrfam": "ipv4", 00:33:43.078 "trsvcid": "4420", 00:33:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.078 "prchk_reftag": false, 00:33:43.078 "prchk_guard": false, 00:33:43.078 "hdgst": false, 00:33:43.078 "ddgst": false, 00:33:43.078 "psk": "key1", 00:33:43.078 "allow_unrecognized_csi": false, 00:33:43.078 "method": "bdev_nvme_attach_controller", 00:33:43.078 "req_id": 1 00:33:43.078 } 00:33:43.078 Got JSON-RPC error response 00:33:43.078 response: 00:33:43.078 { 00:33:43.078 "code": -5, 00:33:43.078 "message": "Input/output error" 00:33:43.078 } 00:33:43.078 17:01:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:43.078 17:01:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:43.078 17:01:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:43.078 17:01:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:43.078 17:01:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:43.078 17:01:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:43.078 17:01:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.078 17:01:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.078 17:01:56 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.078 17:01:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:43.335 17:01:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:43.335 17:01:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:43.335 17:01:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:43.335 17:01:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.335 17:01:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.335 17:01:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:43.335 17:01:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.658 17:01:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:43.658 17:01:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:43.658 17:01:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:43.935 17:01:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:43.935 17:01:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:44.193 17:01:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:44.193 17:01:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.193 17:01:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:44.450 17:01:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:44.450 17:01:58 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FjKYbwWSRx 00:33:44.450 17:01:58 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.450 17:01:58 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:44.450 17:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:44.707 [2024-10-17 17:01:58.344841] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FjKYbwWSRx': 0100660 00:33:44.707 [2024-10-17 17:01:58.344885] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:44.707 request: 00:33:44.707 { 00:33:44.707 "name": "key0", 00:33:44.707 "path": "/tmp/tmp.FjKYbwWSRx", 00:33:44.707 "method": "keyring_file_add_key", 00:33:44.707 "req_id": 1 00:33:44.707 } 00:33:44.707 Got JSON-RPC error response 00:33:44.707 response: 00:33:44.707 { 00:33:44.707 "code": -1, 00:33:44.707 "message": "Operation not permitted" 00:33:44.707 } 00:33:44.707 17:01:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:44.707 17:01:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:44.707 17:01:58 keyring_file -- common/autotest_common.sh@672 
-- # [[ -n '' ]] 00:33:44.707 17:01:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:44.707 17:01:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FjKYbwWSRx 00:33:44.707 17:01:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:44.707 17:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FjKYbwWSRx 00:33:44.965 17:01:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FjKYbwWSRx 00:33:44.965 17:01:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:44.965 17:01:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:44.965 17:01:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.965 17:01:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.965 17:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.965 17:01:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:45.531 17:01:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:45.531 17:01:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:45.531 17:01:58 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.531 17:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:45.531 [2024-10-17 17:01:59.179172] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FjKYbwWSRx': No such file or directory 00:33:45.531 [2024-10-17 17:01:59.179215] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:45.531 [2024-10-17 17:01:59.179239] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:45.531 [2024-10-17 17:01:59.179252] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:45.531 [2024-10-17 17:01:59.179265] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:45.531 [2024-10-17 17:01:59.179277] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:45.531 request: 00:33:45.531 { 00:33:45.531 "name": "nvme0", 00:33:45.531 "trtype": "tcp", 00:33:45.531 "traddr": "127.0.0.1", 00:33:45.531 "adrfam": "ipv4", 00:33:45.531 "trsvcid": "4420", 00:33:45.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:45.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:45.531 "prchk_reftag": 
false, 00:33:45.531 "prchk_guard": false, 00:33:45.531 "hdgst": false, 00:33:45.531 "ddgst": false, 00:33:45.531 "psk": "key0", 00:33:45.531 "allow_unrecognized_csi": false, 00:33:45.531 "method": "bdev_nvme_attach_controller", 00:33:45.531 "req_id": 1 00:33:45.531 } 00:33:45.531 Got JSON-RPC error response 00:33:45.531 response: 00:33:45.531 { 00:33:45.531 "code": -19, 00:33:45.531 "message": "No such device" 00:33:45.531 } 00:33:45.531 17:01:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:45.531 17:01:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:45.531 17:01:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:45.531 17:01:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:45.531 17:01:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:45.531 17:01:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:45.789 17:01:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P5YA3SX2Cn 00:33:45.789 17:01:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:45.789 17:01:59 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:45.789 17:01:59 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 
00:33:45.789 17:01:59 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:45.789 17:01:59 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:33:45.789 17:01:59 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:33:45.789 17:01:59 keyring_file -- nvmf/common.sh@731 -- # python - 00:33:46.048 17:01:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P5YA3SX2Cn 00:33:46.048 17:01:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P5YA3SX2Cn 00:33:46.048 17:01:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.P5YA3SX2Cn 00:33:46.048 17:01:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P5YA3SX2Cn 00:33:46.048 17:01:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P5YA3SX2Cn 00:33:46.306 17:01:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.306 17:01:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.564 nvme0n1 00:33:46.564 17:02:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:46.564 17:02:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:46.564 17:02:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:46.564 17:02:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:46.564 17:02:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:46.564 17:02:00 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key0")' 00:33:46.822 17:02:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:46.822 17:02:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:46.822 17:02:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:47.080 17:02:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:47.080 17:02:00 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:47.080 17:02:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.080 17:02:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.080 17:02:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:47.338 17:02:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:47.338 17:02:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:47.338 17:02:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:47.338 17:02:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:47.338 17:02:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.338 17:02:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.338 17:02:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:47.595 17:02:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:47.595 17:02:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:47.595 17:02:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:47.852 17:02:01 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:47.852 17:02:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:47.852 17:02:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.110 17:02:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:48.110 17:02:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P5YA3SX2Cn 00:33:48.110 17:02:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P5YA3SX2Cn 00:33:48.368 17:02:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rLFI9VJAzM 00:33:48.368 17:02:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rLFI9VJAzM 00:33:48.936 17:02:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:48.936 17:02:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:49.194 nvme0n1 00:33:49.194 17:02:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:49.194 17:02:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:49.455 17:02:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:49.455 "subsystems": [ 00:33:49.455 { 00:33:49.455 "subsystem": "keyring", 00:33:49.455 "config": [ 00:33:49.455 { 00:33:49.455 "method": 
"keyring_file_add_key", 00:33:49.455 "params": { 00:33:49.455 "name": "key0", 00:33:49.455 "path": "/tmp/tmp.P5YA3SX2Cn" 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "keyring_file_add_key", 00:33:49.455 "params": { 00:33:49.455 "name": "key1", 00:33:49.455 "path": "/tmp/tmp.rLFI9VJAzM" 00:33:49.455 } 00:33:49.455 } 00:33:49.455 ] 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "subsystem": "iobuf", 00:33:49.455 "config": [ 00:33:49.455 { 00:33:49.455 "method": "iobuf_set_options", 00:33:49.455 "params": { 00:33:49.455 "small_pool_count": 8192, 00:33:49.455 "large_pool_count": 1024, 00:33:49.455 "small_bufsize": 8192, 00:33:49.455 "large_bufsize": 135168 00:33:49.455 } 00:33:49.455 } 00:33:49.455 ] 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "subsystem": "sock", 00:33:49.455 "config": [ 00:33:49.455 { 00:33:49.455 "method": "sock_set_default_impl", 00:33:49.455 "params": { 00:33:49.455 "impl_name": "posix" 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "sock_impl_set_options", 00:33:49.455 "params": { 00:33:49.455 "impl_name": "ssl", 00:33:49.455 "recv_buf_size": 4096, 00:33:49.455 "send_buf_size": 4096, 00:33:49.455 "enable_recv_pipe": true, 00:33:49.455 "enable_quickack": false, 00:33:49.455 "enable_placement_id": 0, 00:33:49.455 "enable_zerocopy_send_server": true, 00:33:49.455 "enable_zerocopy_send_client": false, 00:33:49.455 "zerocopy_threshold": 0, 00:33:49.455 "tls_version": 0, 00:33:49.455 "enable_ktls": false 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "sock_impl_set_options", 00:33:49.455 "params": { 00:33:49.455 "impl_name": "posix", 00:33:49.455 "recv_buf_size": 2097152, 00:33:49.455 "send_buf_size": 2097152, 00:33:49.455 "enable_recv_pipe": true, 00:33:49.455 "enable_quickack": false, 00:33:49.455 "enable_placement_id": 0, 00:33:49.455 "enable_zerocopy_send_server": true, 00:33:49.455 "enable_zerocopy_send_client": false, 00:33:49.455 "zerocopy_threshold": 0, 00:33:49.455 "tls_version": 
0, 00:33:49.455 "enable_ktls": false 00:33:49.455 } 00:33:49.455 } 00:33:49.455 ] 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "subsystem": "vmd", 00:33:49.455 "config": [] 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "subsystem": "accel", 00:33:49.455 "config": [ 00:33:49.455 { 00:33:49.455 "method": "accel_set_options", 00:33:49.455 "params": { 00:33:49.455 "small_cache_size": 128, 00:33:49.455 "large_cache_size": 16, 00:33:49.455 "task_count": 2048, 00:33:49.455 "sequence_count": 2048, 00:33:49.455 "buf_count": 2048 00:33:49.455 } 00:33:49.455 } 00:33:49.455 ] 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "subsystem": "bdev", 00:33:49.455 "config": [ 00:33:49.455 { 00:33:49.455 "method": "bdev_set_options", 00:33:49.455 "params": { 00:33:49.455 "bdev_io_pool_size": 65535, 00:33:49.455 "bdev_io_cache_size": 256, 00:33:49.455 "bdev_auto_examine": true, 00:33:49.455 "iobuf_small_cache_size": 128, 00:33:49.455 "iobuf_large_cache_size": 16 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "bdev_raid_set_options", 00:33:49.455 "params": { 00:33:49.455 "process_window_size_kb": 1024, 00:33:49.455 "process_max_bandwidth_mb_sec": 0 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "bdev_iscsi_set_options", 00:33:49.455 "params": { 00:33:49.455 "timeout_sec": 30 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "bdev_nvme_set_options", 00:33:49.455 "params": { 00:33:49.455 "action_on_timeout": "none", 00:33:49.455 "timeout_us": 0, 00:33:49.455 "timeout_admin_us": 0, 00:33:49.455 "keep_alive_timeout_ms": 10000, 00:33:49.455 "arbitration_burst": 0, 00:33:49.455 "low_priority_weight": 0, 00:33:49.455 "medium_priority_weight": 0, 00:33:49.455 "high_priority_weight": 0, 00:33:49.455 "nvme_adminq_poll_period_us": 10000, 00:33:49.455 "nvme_ioq_poll_period_us": 0, 00:33:49.455 "io_queue_requests": 512, 00:33:49.455 "delay_cmd_submit": true, 00:33:49.455 "transport_retry_count": 4, 00:33:49.455 "bdev_retry_count": 3, 
00:33:49.455 "transport_ack_timeout": 0, 00:33:49.455 "ctrlr_loss_timeout_sec": 0, 00:33:49.455 "reconnect_delay_sec": 0, 00:33:49.455 "fast_io_fail_timeout_sec": 0, 00:33:49.455 "disable_auto_failback": false, 00:33:49.455 "generate_uuids": false, 00:33:49.455 "transport_tos": 0, 00:33:49.455 "nvme_error_stat": false, 00:33:49.455 "rdma_srq_size": 0, 00:33:49.455 "io_path_stat": false, 00:33:49.455 "allow_accel_sequence": false, 00:33:49.455 "rdma_max_cq_size": 0, 00:33:49.455 "rdma_cm_event_timeout_ms": 0, 00:33:49.455 "dhchap_digests": [ 00:33:49.455 "sha256", 00:33:49.455 "sha384", 00:33:49.455 "sha512" 00:33:49.455 ], 00:33:49.455 "dhchap_dhgroups": [ 00:33:49.455 "null", 00:33:49.455 "ffdhe2048", 00:33:49.455 "ffdhe3072", 00:33:49.455 "ffdhe4096", 00:33:49.455 "ffdhe6144", 00:33:49.455 "ffdhe8192" 00:33:49.455 ] 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "bdev_nvme_attach_controller", 00:33:49.455 "params": { 00:33:49.455 "name": "nvme0", 00:33:49.455 "trtype": "TCP", 00:33:49.455 "adrfam": "IPv4", 00:33:49.455 "traddr": "127.0.0.1", 00:33:49.455 "trsvcid": "4420", 00:33:49.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.455 "prchk_reftag": false, 00:33:49.455 "prchk_guard": false, 00:33:49.455 "ctrlr_loss_timeout_sec": 0, 00:33:49.455 "reconnect_delay_sec": 0, 00:33:49.455 "fast_io_fail_timeout_sec": 0, 00:33:49.455 "psk": "key0", 00:33:49.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.455 "hdgst": false, 00:33:49.455 "ddgst": false, 00:33:49.455 "multipath": "multipath" 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "bdev_nvme_set_hotplug", 00:33:49.455 "params": { 00:33:49.455 "period_us": 100000, 00:33:49.455 "enable": false 00:33:49.455 } 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "method": "bdev_wait_for_examine" 00:33:49.455 } 00:33:49.455 ] 00:33:49.455 }, 00:33:49.455 { 00:33:49.455 "subsystem": "nbd", 00:33:49.455 "config": [] 00:33:49.455 } 00:33:49.455 ] 00:33:49.455 }' 00:33:49.455 
17:02:02 keyring_file -- keyring/file.sh@115 -- # killprocess 2550993 00:33:49.455 17:02:02 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2550993 ']' 00:33:49.455 17:02:02 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2550993 00:33:49.456 17:02:02 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:49.456 17:02:02 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:49.456 17:02:02 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2550993 00:33:49.456 17:02:03 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:49.456 17:02:03 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:49.456 17:02:03 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2550993' 00:33:49.456 killing process with pid 2550993 00:33:49.456 17:02:03 keyring_file -- common/autotest_common.sh@969 -- # kill 2550993 00:33:49.456 Received shutdown signal, test time was about 1.000000 seconds 00:33:49.456 00:33:49.456 Latency(us) 00:33:49.456 [2024-10-17T15:02:03.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.456 [2024-10-17T15:02:03.146Z] =================================================================================================================== 00:33:49.456 [2024-10-17T15:02:03.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.456 17:02:03 keyring_file -- common/autotest_common.sh@974 -- # wait 2550993 00:33:49.714 17:02:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=2552468 00:33:49.714 17:02:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2552468 /var/tmp/bperf.sock 00:33:49.714 17:02:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:49.714 17:02:03 keyring_file -- common/autotest_common.sh@831 -- # '[' 
-z 2552468 ']' 00:33:49.714 17:02:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:49.714 17:02:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:49.714 "subsystems": [ 00:33:49.714 { 00:33:49.714 "subsystem": "keyring", 00:33:49.714 "config": [ 00:33:49.714 { 00:33:49.714 "method": "keyring_file_add_key", 00:33:49.714 "params": { 00:33:49.714 "name": "key0", 00:33:49.714 "path": "/tmp/tmp.P5YA3SX2Cn" 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "keyring_file_add_key", 00:33:49.714 "params": { 00:33:49.714 "name": "key1", 00:33:49.714 "path": "/tmp/tmp.rLFI9VJAzM" 00:33:49.714 } 00:33:49.714 } 00:33:49.714 ] 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "subsystem": "iobuf", 00:33:49.714 "config": [ 00:33:49.714 { 00:33:49.714 "method": "iobuf_set_options", 00:33:49.714 "params": { 00:33:49.714 "small_pool_count": 8192, 00:33:49.714 "large_pool_count": 1024, 00:33:49.714 "small_bufsize": 8192, 00:33:49.714 "large_bufsize": 135168 00:33:49.714 } 00:33:49.714 } 00:33:49.714 ] 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "subsystem": "sock", 00:33:49.714 "config": [ 00:33:49.714 { 00:33:49.714 "method": "sock_set_default_impl", 00:33:49.714 "params": { 00:33:49.714 "impl_name": "posix" 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "sock_impl_set_options", 00:33:49.714 "params": { 00:33:49.714 "impl_name": "ssl", 00:33:49.714 "recv_buf_size": 4096, 00:33:49.714 "send_buf_size": 4096, 00:33:49.714 "enable_recv_pipe": true, 00:33:49.714 "enable_quickack": false, 00:33:49.714 "enable_placement_id": 0, 00:33:49.714 "enable_zerocopy_send_server": true, 00:33:49.714 "enable_zerocopy_send_client": false, 00:33:49.714 "zerocopy_threshold": 0, 00:33:49.714 "tls_version": 0, 00:33:49.714 "enable_ktls": false 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "sock_impl_set_options", 00:33:49.714 "params": { 00:33:49.714 "impl_name": "posix", 00:33:49.714 
"recv_buf_size": 2097152, 00:33:49.714 "send_buf_size": 2097152, 00:33:49.714 "enable_recv_pipe": true, 00:33:49.714 "enable_quickack": false, 00:33:49.714 "enable_placement_id": 0, 00:33:49.714 "enable_zerocopy_send_server": true, 00:33:49.714 "enable_zerocopy_send_client": false, 00:33:49.714 "zerocopy_threshold": 0, 00:33:49.714 "tls_version": 0, 00:33:49.714 "enable_ktls": false 00:33:49.714 } 00:33:49.714 } 00:33:49.714 ] 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "subsystem": "vmd", 00:33:49.714 "config": [] 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "subsystem": "accel", 00:33:49.714 "config": [ 00:33:49.714 { 00:33:49.714 "method": "accel_set_options", 00:33:49.714 "params": { 00:33:49.714 "small_cache_size": 128, 00:33:49.714 "large_cache_size": 16, 00:33:49.714 "task_count": 2048, 00:33:49.714 "sequence_count": 2048, 00:33:49.714 "buf_count": 2048 00:33:49.714 } 00:33:49.714 } 00:33:49.714 ] 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "subsystem": "bdev", 00:33:49.714 "config": [ 00:33:49.714 { 00:33:49.714 "method": "bdev_set_options", 00:33:49.714 "params": { 00:33:49.714 "bdev_io_pool_size": 65535, 00:33:49.714 "bdev_io_cache_size": 256, 00:33:49.714 "bdev_auto_examine": true, 00:33:49.714 "iobuf_small_cache_size": 128, 00:33:49.714 "iobuf_large_cache_size": 16 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "bdev_raid_set_options", 00:33:49.714 "params": { 00:33:49.714 "process_window_size_kb": 1024, 00:33:49.714 "process_max_bandwidth_mb_sec": 0 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "bdev_iscsi_set_options", 00:33:49.714 "params": { 00:33:49.714 "timeout_sec": 30 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "bdev_nvme_set_options", 00:33:49.714 "params": { 00:33:49.714 "action_on_timeout": "none", 00:33:49.714 "timeout_us": 0, 00:33:49.714 "timeout_admin_us": 0, 00:33:49.714 "keep_alive_timeout_ms": 10000, 00:33:49.714 "arbitration_burst": 0, 00:33:49.714 
"low_priority_weight": 0, 00:33:49.714 "medium_priority_weight": 0, 00:33:49.714 "high_priority_weight": 0, 00:33:49.714 "nvme_adminq_poll_period_us": 10000, 00:33:49.714 "nvme_ioq_poll_period_us": 0, 00:33:49.714 "io_queue_requests": 512, 00:33:49.714 "delay_cmd_submit": true, 00:33:49.714 "transport_retry_count": 4, 00:33:49.714 "bdev_retry_count": 3, 00:33:49.714 "transport_ack_timeout": 0, 00:33:49.714 "ctrlr_loss_timeout_sec": 0, 00:33:49.714 "reconnect_delay_sec": 0, 00:33:49.714 "fast_io_fail_timeout_sec": 0, 00:33:49.714 "disable_auto_failback": false, 00:33:49.714 "generate_uuids": false, 00:33:49.714 "transport_tos": 0, 00:33:49.714 "nvme_error_stat": false, 00:33:49.714 "rdma_srq_size": 0, 00:33:49.714 "io_path_stat": false, 00:33:49.714 "allow_accel_sequence": false, 00:33:49.714 "rdma_max_cq_size": 0, 00:33:49.714 "rdma_cm_event_timeout_ms": 0, 00:33:49.714 "dhchap_digests": [ 00:33:49.714 "sha256", 00:33:49.714 "sha384", 00:33:49.714 "sha512" 00:33:49.714 ], 00:33:49.714 "dhchap_dhgroups": [ 00:33:49.714 "null", 00:33:49.714 "ffdhe2048", 00:33:49.714 "ffdhe3072", 00:33:49.714 "ffdhe4096", 00:33:49.714 "ffdhe6144", 00:33:49.714 "ffdhe8192" 00:33:49.714 ] 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "bdev_nvme_attach_controller", 00:33:49.714 "params": { 00:33:49.714 "name": "nvme0", 00:33:49.714 "trtype": "TCP", 00:33:49.714 "adrfam": "IPv4", 00:33:49.714 "traddr": "127.0.0.1", 00:33:49.714 "trsvcid": "4420", 00:33:49.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.714 "prchk_reftag": false, 00:33:49.714 "prchk_guard": false, 00:33:49.714 "ctrlr_loss_timeout_sec": 0, 00:33:49.714 "reconnect_delay_sec": 0, 00:33:49.714 "fast_io_fail_timeout_sec": 0, 00:33:49.714 "psk": "key0", 00:33:49.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.714 "hdgst": false, 00:33:49.714 "ddgst": false, 00:33:49.714 "multipath": "multipath" 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "bdev_nvme_set_hotplug", 
00:33:49.714 "params": { 00:33:49.714 "period_us": 100000, 00:33:49.714 "enable": false 00:33:49.714 } 00:33:49.714 }, 00:33:49.714 { 00:33:49.714 "method": "bdev_wait_for_examine" 00:33:49.714 } 00:33:49.714 ] 00:33:49.714 }, 00:33:49.715 { 00:33:49.715 "subsystem": "nbd", 00:33:49.715 "config": [] 00:33:49.715 } 00:33:49.715 ] 00:33:49.715 }' 00:33:49.715 17:02:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:49.715 17:02:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:49.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:49.715 17:02:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:49.715 17:02:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:49.715 [2024-10-17 17:02:03.261106] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 00:33:49.715 [2024-10-17 17:02:03.261199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552468 ] 00:33:49.715 [2024-10-17 17:02:03.318768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.715 [2024-10-17 17:02:03.376799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.973 [2024-10-17 17:02:03.555616] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:50.231 17:02:03 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.231 17:02:03 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:50.231 17:02:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:50.231 17:02:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:50.231 
17:02:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.491 17:02:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:50.491 17:02:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:50.491 17:02:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:50.491 17:02:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:50.491 17:02:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.491 17:02:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:50.491 17:02:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.749 17:02:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:50.749 17:02:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:50.749 17:02:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:50.749 17:02:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:50.749 17:02:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.749 17:02:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.749 17:02:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:51.007 17:02:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:51.007 17:02:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:51.007 17:02:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:51.007 17:02:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:51.265 17:02:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == 
nvme0 ]] 00:33:51.266 17:02:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:51.266 17:02:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.P5YA3SX2Cn /tmp/tmp.rLFI9VJAzM 00:33:51.266 17:02:04 keyring_file -- keyring/file.sh@20 -- # killprocess 2552468 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2552468 ']' 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2552468 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2552468 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2552468' 00:33:51.266 killing process with pid 2552468 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@969 -- # kill 2552468 00:33:51.266 Received shutdown signal, test time was about 1.000000 seconds 00:33:51.266 00:33:51.266 Latency(us) 00:33:51.266 [2024-10-17T15:02:04.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.266 [2024-10-17T15:02:04.956Z] =================================================================================================================== 00:33:51.266 [2024-10-17T15:02:04.956Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:51.266 17:02:04 keyring_file -- common/autotest_common.sh@974 -- # wait 2552468 00:33:51.524 17:02:05 keyring_file -- keyring/file.sh@21 -- # killprocess 2550983 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2550983 ']' 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2550983 
00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2550983 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2550983' 00:33:51.524 killing process with pid 2550983 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@969 -- # kill 2550983 00:33:51.524 17:02:05 keyring_file -- common/autotest_common.sh@974 -- # wait 2550983 00:33:52.090 00:33:52.090 real 0m14.663s 00:33:52.090 user 0m37.320s 00:33:52.090 sys 0m3.237s 00:33:52.090 17:02:05 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.090 17:02:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:52.090 ************************************ 00:33:52.090 END TEST keyring_file 00:33:52.090 ************************************ 00:33:52.090 17:02:05 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:33:52.090 17:02:05 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:52.090 17:02:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:52.090 17:02:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:52.090 17:02:05 -- common/autotest_common.sh@10 -- # set +x 00:33:52.090 ************************************ 00:33:52.090 START TEST keyring_linux 00:33:52.090 ************************************ 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:52.090 Joined session keyring: 522276612 00:33:52.090 * Looking for test storage... 00:33:52.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.090 --rc genhtml_branch_coverage=1 00:33:52.090 --rc genhtml_function_coverage=1 00:33:52.090 --rc genhtml_legend=1 00:33:52.090 --rc geninfo_all_blocks=1 00:33:52.090 --rc geninfo_unexecuted_blocks=1 00:33:52.090 00:33:52.090 ' 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.090 --rc genhtml_branch_coverage=1 00:33:52.090 --rc genhtml_function_coverage=1 00:33:52.090 --rc genhtml_legend=1 00:33:52.090 --rc geninfo_all_blocks=1 00:33:52.090 --rc geninfo_unexecuted_blocks=1 00:33:52.090 00:33:52.090 ' 
00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.090 --rc genhtml_branch_coverage=1 00:33:52.090 --rc genhtml_function_coverage=1 00:33:52.090 --rc genhtml_legend=1 00:33:52.090 --rc geninfo_all_blocks=1 00:33:52.090 --rc geninfo_unexecuted_blocks=1 00:33:52.090 00:33:52.090 ' 00:33:52.090 17:02:05 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.090 --rc genhtml_branch_coverage=1 00:33:52.090 --rc genhtml_function_coverage=1 00:33:52.090 --rc genhtml_legend=1 00:33:52.090 --rc geninfo_all_blocks=1 00:33:52.090 --rc geninfo_unexecuted_blocks=1 00:33:52.090 00:33:52.090 ' 00:33:52.090 17:02:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:52.090 17:02:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
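The `lt 1.15 2` check traced above (the lcov version gate) runs through scripts/common.sh's `cmp_versions`. A pared-down sketch of the same field-by-field compare — this assumed-minimal version handles only plain dotted numerics, unlike the fuller helper in the trace:

```shell
# Minimal dotted-version "less than", in the spirit of the cmp_versions
# trace above: split both versions on '.', compare component by component,
# treating missing components as 0.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        if ((10#${a[i]:-0} < 10#${b[i]:-0})); then return 0; fi   # 10# avoids octal on "08"
        if ((10#${a[i]:-0} > 10#${b[i]:-0})); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison is what makes `1.2 < 1.10` come out true; a plain string compare would get that case wrong.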
00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.090 17:02:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.090 17:02:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.090 17:02:05 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.090 17:02:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.090 17:02:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:52.090 17:02:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:52.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.090 17:02:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.090 17:02:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:52.090 17:02:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:52.090 17:02:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:52.090 17:02:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:52.090 17:02:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:52.091 17:02:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:52.091 17:02:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@731 -- # python - 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:52.091 /tmp/:spdk-test:key0 00:33:52.091 17:02:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:33:52.091 17:02:05 keyring_linux -- nvmf/common.sh@731 -- # python - 00:33:52.091 17:02:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:52.349 17:02:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:52.349 /tmp/:spdk-test:key1 00:33:52.349 17:02:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2552947 00:33:52.349 17:02:05 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:52.349 17:02:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2552947 00:33:52.349 17:02:05 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2552947 ']' 00:33:52.349 17:02:05 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.349 17:02:05 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:52.349 17:02:05 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.349 17:02:05 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:52.349 17:02:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:52.349 [2024-10-17 17:02:05.837007] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:33:52.349 [2024-10-17 17:02:05.837125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552947 ] 00:33:52.349 [2024-10-17 17:02:05.896247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.349 [2024-10-17 17:02:05.954715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:52.608 17:02:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:52.608 [2024-10-17 17:02:06.236840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.608 null0 00:33:52.608 [2024-10-17 17:02:06.268890] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:52.608 [2024-10-17 17:02:06.269449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.608 17:02:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:52.608 679891729 00:33:52.608 17:02:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:52.608 784626225 00:33:52.608 17:02:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2552954 00:33:52.608 17:02:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:52.608 17:02:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2552954 /var/tmp/bperf.sock 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2552954 ']' 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:52.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:52.608 17:02:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:52.867 [2024-10-17 17:02:06.337386] Starting SPDK v25.01-pre git sha1 767a69c7c / DPDK 24.03.0 initialization... 
00:33:52.867 [2024-10-17 17:02:06.337465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552954 ] 00:33:52.867 [2024-10-17 17:02:06.397717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.867 [2024-10-17 17:02:06.460335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.125 17:02:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:53.125 17:02:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:53.125 17:02:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:53.125 17:02:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:53.382 17:02:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:53.382 17:02:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:53.639 17:02:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:53.639 17:02:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:53.897 [2024-10-17 17:02:07.451729] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:53.897 nvme0n1 00:33:53.897 17:02:07 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:33:53.897 17:02:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:53.897 17:02:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:53.897 17:02:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:53.897 17:02:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:53.897 17:02:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:54.154 17:02:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:54.154 17:02:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:54.154 17:02:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:54.154 17:02:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:54.154 17:02:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.154 17:02:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.154 17:02:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:54.411 17:02:08 keyring_linux -- keyring/linux.sh@25 -- # sn=679891729 00:33:54.411 17:02:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:54.411 17:02:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:54.411 17:02:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 679891729 == \6\7\9\8\9\1\7\2\9 ]] 00:33:54.411 17:02:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 679891729 00:33:54.411 17:02:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:54.411 17:02:08 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:54.669 Running I/O for 1 seconds... 00:33:55.603 9520.00 IOPS, 37.19 MiB/s 00:33:55.603 Latency(us) 00:33:55.603 [2024-10-17T15:02:09.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:55.603 nvme0n1 : 1.01 9527.61 37.22 0.00 0.00 13340.96 9514.86 23107.51 00:33:55.603 [2024-10-17T15:02:09.293Z] =================================================================================================================== 00:33:55.603 [2024-10-17T15:02:09.293Z] Total : 9527.61 37.22 0.00 0.00 13340.96 9514.86 23107.51 00:33:55.603 { 00:33:55.603 "results": [ 00:33:55.603 { 00:33:55.603 "job": "nvme0n1", 00:33:55.603 "core_mask": "0x2", 00:33:55.603 "workload": "randread", 00:33:55.603 "status": "finished", 00:33:55.603 "queue_depth": 128, 00:33:55.603 "io_size": 4096, 00:33:55.603 "runtime": 1.012741, 00:33:55.603 "iops": 9527.608737080853, 00:33:55.603 "mibps": 37.21722162922208, 00:33:55.603 "io_failed": 0, 00:33:55.603 "io_timeout": 0, 00:33:55.603 "avg_latency_us": 13340.95797146509, 00:33:55.603 "min_latency_us": 9514.856296296297, 00:33:55.603 "max_latency_us": 23107.508148148147 00:33:55.603 } 00:33:55.603 ], 00:33:55.603 "core_count": 1 00:33:55.603 } 00:33:55.603 17:02:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:55.603 17:02:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:55.860 17:02:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:55.860 17:02:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:55.860 17:02:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:55.860 17:02:09 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:55.860 17:02:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.860 17:02:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:56.118 17:02:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:56.118 17:02:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:56.118 17:02:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:56.118 17:02:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:56.118 17:02:09 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:56.118 17:02:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:56.377 [2024-10-17 17:02:10.025206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:56.377 [2024-10-17 17:02:10.025875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0a160 (107): Transport endpoint is not connected 00:33:56.377 [2024-10-17 17:02:10.026864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0a160 (9): Bad file descriptor 00:33:56.377 [2024-10-17 17:02:10.027862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:56.377 [2024-10-17 17:02:10.027886] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:56.377 [2024-10-17 17:02:10.027902] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:56.377 [2024-10-17 17:02:10.027919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:56.377 request: 00:33:56.377 { 00:33:56.377 "name": "nvme0", 00:33:56.377 "trtype": "tcp", 00:33:56.377 "traddr": "127.0.0.1", 00:33:56.377 "adrfam": "ipv4", 00:33:56.377 "trsvcid": "4420", 00:33:56.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.377 "prchk_reftag": false, 00:33:56.377 "prchk_guard": false, 00:33:56.377 "hdgst": false, 00:33:56.377 "ddgst": false, 00:33:56.377 "psk": ":spdk-test:key1", 00:33:56.377 "allow_unrecognized_csi": false, 00:33:56.377 "method": "bdev_nvme_attach_controller", 00:33:56.377 "req_id": 1 00:33:56.377 } 00:33:56.377 Got JSON-RPC error response 00:33:56.377 response: 00:33:56.377 { 00:33:56.377 "code": -5, 00:33:56.377 "message": "Input/output error" 00:33:56.377 } 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@33 -- # sn=679891729 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 679891729 00:33:56.377 1 links removed 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:56.377 
17:02:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@33 -- # sn=784626225 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 784626225 00:33:56.377 1 links removed 00:33:56.377 17:02:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2552954 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2552954 ']' 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2552954 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:56.377 17:02:10 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2552954 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2552954' 00:33:56.637 killing process with pid 2552954 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@969 -- # kill 2552954 00:33:56.637 Received shutdown signal, test time was about 1.000000 seconds 00:33:56.637 00:33:56.637 Latency(us) 00:33:56.637 [2024-10-17T15:02:10.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.637 [2024-10-17T15:02:10.327Z] =================================================================================================================== 00:33:56.637 [2024-10-17T15:02:10.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@974 -- # wait 2552954 
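The `format_interchange_psk` step traced above turns the configured hex PSK (e.g. `00112233445566778899aabbccddeeff`) into the `NVMeTLSkey-1:00:...:` string that is loaded into the kernel keyring with `keyctl add`. A minimal Python sketch of that encoding, assuming the checksum convention is a standard CRC-32 of the configured PSK appended little-endian before base64 encoding (the function name comes from the trace; the checksum detail is an assumption):

```python
# Hypothetical re-implementation of the format_interchange_psk helper
# seen in the trace: base64("<hex PSK ASCII bytes> || CRC-32(PSK) LE")
# wrapped in the "NVMeTLSkey-1:<digest>:" prefix. The CRC-32
# little-endian suffix is an assumption about the interchange format.
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    psk = key_hex.encode("ascii")
    # Append the CRC-32 of the configured PSK, little-endian.
    tagged = psk + struct.pack("<I", zlib.crc32(psk))
    return f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(tagged).decode()}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

Under these assumptions the output has the same shape as the `NVMeTLSkey-1:00:MDAx...JEiQ:` payload the trace stores under `:spdk-test:key0`; whether it matches byte-for-byte depends on the checksum convention actually used.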
00:33:56.637 17:02:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2552947 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2552947 ']' 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2552947 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:56.637 17:02:10 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2552947 00:33:56.895 17:02:10 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:56.895 17:02:10 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:56.895 17:02:10 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2552947' 00:33:56.895 killing process with pid 2552947 00:33:56.895 17:02:10 keyring_linux -- common/autotest_common.sh@969 -- # kill 2552947 00:33:56.895 17:02:10 keyring_linux -- common/autotest_common.sh@974 -- # wait 2552947 00:33:57.153 00:33:57.153 real 0m5.256s 00:33:57.153 user 0m10.347s 00:33:57.153 sys 0m1.622s 00:33:57.153 17:02:10 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:57.153 17:02:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:57.153 ************************************ 00:33:57.153 END TEST keyring_linux 00:33:57.153 ************************************ 00:33:57.153 17:02:10 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:57.153 17:02:10 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:57.153 17:02:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:57.153 17:02:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:57.153 17:02:10 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:57.153 17:02:10 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:57.153 17:02:10 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:57.153 17:02:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:57.153 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:33:57.153 17:02:10 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:57.153 17:02:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:57.153 17:02:10 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:57.153 17:02:10 -- common/autotest_common.sh@10 -- # set +x 00:33:59.682 INFO: APP EXITING 00:33:59.682 INFO: killing all VMs 00:33:59.682 INFO: killing vhost app 00:33:59.682 INFO: EXIT DONE 00:34:00.247 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:34:00.247 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:34:00.247 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:34:00.247 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:34:00.247 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:34:00.247 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:34:00.504 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:34:00.504 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:34:00.504 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:34:00.504 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:34:00.504 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:34:00.504 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:34:00.504 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:34:00.504 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:34:00.504 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:34:00.504 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:34:00.504 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:34:01.879 Cleaning 00:34:01.879 Removing: /var/run/dpdk/spdk0/config 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:01.879 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:01.879 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:01.880 Removing: /var/run/dpdk/spdk1/config 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:01.880 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:01.880 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:01.880 Removing: /var/run/dpdk/spdk2/config 00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:01.880 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:34:01.880 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:34:01.880 Removing: /var/run/dpdk/spdk2/hugepage_info
00:34:01.880 Removing: /var/run/dpdk/spdk3/config
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:34:01.880 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:34:01.880 Removing: /var/run/dpdk/spdk3/hugepage_info
00:34:01.880 Removing: /var/run/dpdk/spdk4/config
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:34:01.880 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:34:01.880 Removing: /var/run/dpdk/spdk4/hugepage_info
00:34:01.880 Removing: /dev/shm/bdev_svc_trace.1
00:34:01.880 Removing: /dev/shm/nvmf_trace.0
00:34:01.880 Removing: /dev/shm/spdk_tgt_trace.pid2226175
00:34:01.880 Removing: /var/run/dpdk/spdk0
00:34:01.880 Removing: /var/run/dpdk/spdk1
00:34:01.880 Removing: /var/run/dpdk/spdk2
00:34:01.880 Removing: /var/run/dpdk/spdk3
00:34:01.880 Removing: /var/run/dpdk/spdk4
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2223944
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2224750
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2226175
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2226527
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2227220
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2227360
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2228075
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2228207
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2228470
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2229674
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2230595
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2230907
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2231111
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2231362
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2231633
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2231790
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2231948
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2232149
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2232453
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2234827
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2235112
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2235272
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2235278
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2235709
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2235720
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2236151
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2236154
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2236447
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2236452
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2236622
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2236752
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2237129
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2237282
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2237603
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2239722
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2242374
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2249380
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2249788
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2252319
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2252592
00:34:01.880 Removing: /var/run/dpdk/spdk_pid2255119
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2259588
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2261664
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2268075
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2273306
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2274624
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2275247
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2285548
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2287855
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2315769
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2319070
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2322903
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2326766
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2326768
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2327421
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2327983
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2328616
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2329020
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2329028
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2329285
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2329475
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2329489
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2330169
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2331232
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2331898
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2332298
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2332421
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2332562
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2333456
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2334303
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2339524
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2368409
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2371326
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2372504
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2373826
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2373969
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2374107
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2374254
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2374692
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2376014
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2376755
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2377190
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2378911
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2379836
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2380284
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2382675
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2386089
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2386090
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2386091
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2388303
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2393029
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2395697
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2399456
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2400404
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2401514
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2402580
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2405433
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2407676
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2411918
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2412038
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2414815
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2414956
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2415090
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2415479
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2415484
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2418242
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2418691
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2421868
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2423725
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2427162
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2430489
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2444799
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2449156
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2449159
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2462618
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2463594
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2464058
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2464470
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2465050
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2465455
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2465871
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2466394
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2468780
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2469043
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2472859
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2472918
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2476286
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2478895
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2485697
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2486098
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2488604
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2488877
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2491379
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2495069
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2497851
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2504106
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2509301
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2510600
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2511205
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2521327
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2523579
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2525592
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2530725
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2530747
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2534169
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2535686
00:34:02.139 Removing: /var/run/dpdk/spdk_pid2537083
00:34:02.396 Removing: /var/run/dpdk/spdk_pid2537953
00:34:02.396 Removing: /var/run/dpdk/spdk_pid2539355
00:34:02.396 Removing: /var/run/dpdk/spdk_pid2540106
00:34:02.396 Removing: /var/run/dpdk/spdk_pid2545512
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2545902
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2546303
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2547853
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2548134
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2548532
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2550983
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2550993
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2552468
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2552947
00:34:02.397 Removing: /var/run/dpdk/spdk_pid2552954
00:34:02.397 Clean
00:34:02.397 17:02:15 -- common/autotest_common.sh@1451 -- # return 0
00:34:02.397 17:02:15 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:34:02.397 17:02:15 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:02.397 17:02:15 -- common/autotest_common.sh@10 -- # set +x
00:34:02.397 17:02:15 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:34:02.397 17:02:15 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:02.397 17:02:15 -- common/autotest_common.sh@10 -- # set +x
00:34:02.397 17:02:15 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:02.397 17:02:15 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:02.397 17:02:15 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:02.397 17:02:15 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:34:02.397 17:02:15 -- spdk/autotest.sh@394 -- # hostname
00:34:02.397 17:02:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:02.654 geninfo: WARNING: invalid characters removed from testname!
00:34:41.396 17:02:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:41.396 17:02:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:44.703 17:02:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:47.984 17:03:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:50.513 17:03:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:53.793 17:03:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:57.075 17:03:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:57.075 17:03:10 -- common/autotest_common.sh@1690 -- $ [[ y == y ]]
00:34:57.075 17:03:10 -- common/autotest_common.sh@1691 -- $ lcov --version
00:34:57.075 17:03:10 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}'
00:34:57.075 17:03:10 -- common/autotest_common.sh@1691 -- $ lt 1.15 2
00:34:57.075 17:03:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:34:57.075 17:03:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:34:57.075 17:03:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:34:57.075 17:03:10 -- scripts/common.sh@336 -- $ IFS=.-:
00:34:57.075 17:03:10 -- scripts/common.sh@336 -- $ read -ra ver1
00:34:57.075 17:03:10 -- scripts/common.sh@337 -- $ IFS=.-:
00:34:57.075 17:03:10 -- scripts/common.sh@337 -- $ read -ra ver2
00:34:57.075 17:03:10 -- scripts/common.sh@338 -- $ local 'op=<'
00:34:57.075 17:03:10 -- scripts/common.sh@340 -- $ ver1_l=2
00:34:57.075 17:03:10 -- scripts/common.sh@341 -- $ ver2_l=1
00:34:57.075 17:03:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:34:57.075 17:03:10 -- scripts/common.sh@344 -- $ case "$op" in
00:34:57.075 17:03:10 -- scripts/common.sh@345 -- $ : 1
00:34:57.075 17:03:10 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:34:57.075 17:03:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:57.075 17:03:10 -- scripts/common.sh@365 -- $ decimal 1
00:34:57.075 17:03:10 -- scripts/common.sh@353 -- $ local d=1
00:34:57.075 17:03:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:34:57.075 17:03:10 -- scripts/common.sh@355 -- $ echo 1
00:34:57.075 17:03:10 -- scripts/common.sh@365 -- $ ver1[v]=1
00:34:57.075 17:03:10 -- scripts/common.sh@366 -- $ decimal 2
00:34:57.075 17:03:10 -- scripts/common.sh@353 -- $ local d=2
00:34:57.075 17:03:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:34:57.075 17:03:10 -- scripts/common.sh@355 -- $ echo 2
00:34:57.075 17:03:10 -- scripts/common.sh@366 -- $ ver2[v]=2
00:34:57.075 17:03:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:34:57.075 17:03:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:34:57.075 17:03:10 -- scripts/common.sh@368 -- $ return 0
00:34:57.075 17:03:10 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:57.075 17:03:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:34:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:57.075 --rc genhtml_branch_coverage=1
00:34:57.075 --rc genhtml_function_coverage=1
00:34:57.075 --rc genhtml_legend=1
00:34:57.075 --rc geninfo_all_blocks=1
00:34:57.075 --rc geninfo_unexecuted_blocks=1
00:34:57.075
00:34:57.075 '
00:34:57.075 17:03:10 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:34:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:57.075 --rc genhtml_branch_coverage=1
00:34:57.075 --rc genhtml_function_coverage=1
00:34:57.075 --rc genhtml_legend=1
00:34:57.075 --rc geninfo_all_blocks=1
00:34:57.075 --rc geninfo_unexecuted_blocks=1
00:34:57.075
00:34:57.075 '
00:34:57.075 17:03:10 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:34:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:57.075 --rc genhtml_branch_coverage=1
00:34:57.075 --rc genhtml_function_coverage=1
00:34:57.075 --rc genhtml_legend=1
00:34:57.075 --rc geninfo_all_blocks=1
00:34:57.075 --rc geninfo_unexecuted_blocks=1
00:34:57.075
00:34:57.075 '
00:34:57.075 17:03:10 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:34:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:57.075 --rc genhtml_branch_coverage=1
00:34:57.075 --rc genhtml_function_coverage=1
00:34:57.075 --rc genhtml_legend=1
00:34:57.075 --rc geninfo_all_blocks=1
00:34:57.075 --rc geninfo_unexecuted_blocks=1
00:34:57.075
00:34:57.075 '
00:34:57.075 17:03:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:57.075 17:03:10 -- scripts/common.sh@15 -- $ shopt -s extglob
00:34:57.075 17:03:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:57.075 17:03:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:57.075 17:03:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:57.075 17:03:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.075 17:03:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.076 17:03:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.076 17:03:10 -- paths/export.sh@5 -- $ export PATH
00:34:57.076 17:03:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.076 17:03:10 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:34:57.076 17:03:10 -- common/autobuild_common.sh@486 -- $ date +%s
00:34:57.076 17:03:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729177390.XXXXXX
00:34:57.076 17:03:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729177390.4m0GYp
00:34:57.076 17:03:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:34:57.076 17:03:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:34:57.076 17:03:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:34:57.076 17:03:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:57.076 17:03:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:57.076 17:03:10 -- common/autobuild_common.sh@502 -- $ get_config_params
00:34:57.076 17:03:10 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:34:57.076 17:03:10 -- common/autotest_common.sh@10 -- $ set +x
00:34:57.076 17:03:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:34:57.076 17:03:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:34:57.076 17:03:10 -- pm/common@17 -- $ local monitor
00:34:57.076 17:03:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:57.076 17:03:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:57.076 17:03:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:57.076 17:03:10 -- pm/common@21 -- $ date +%s
00:34:57.076 17:03:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:57.076 17:03:10 -- pm/common@21 -- $ date +%s
00:34:57.076 17:03:10 -- pm/common@25 -- $ sleep 1
00:34:57.076 17:03:10 -- pm/common@21 -- $ date +%s
00:34:57.076 17:03:10 -- pm/common@21 -- $ date +%s
00:34:57.076 17:03:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729177390
00:34:57.076 17:03:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729177390
00:34:57.076 17:03:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729177390
00:34:57.076 17:03:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729177390
00:34:57.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729177390_collect-vmstat.pm.log
00:34:57.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729177390_collect-cpu-load.pm.log
00:34:57.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729177390_collect-cpu-temp.pm.log
00:34:57.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729177390_collect-bmc-pm.bmc.pm.log
00:34:58.010 17:03:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:34:58.010 17:03:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:34:58.010 17:03:11 -- spdk/autopackage.sh@14 -- $ timing_finish
00:34:58.010 17:03:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:58.010 17:03:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:58.010 17:03:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:58.010 17:03:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:58.010 17:03:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:58.010 17:03:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:58.010 17:03:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:58.010 17:03:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:34:58.011 17:03:11 -- pm/common@44 -- $ pid=2564264
00:34:58.011 17:03:11 -- pm/common@50 -- $ kill -TERM 2564264
00:34:58.011 17:03:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:58.011 17:03:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:34:58.011 17:03:11 -- pm/common@44 -- $ pid=2564266
00:34:58.011 17:03:11 -- pm/common@50 -- $ kill -TERM 2564266
00:34:58.011 17:03:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:58.011 17:03:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:34:58.011 17:03:11 -- pm/common@44 -- $ pid=2564268
00:34:58.011 17:03:11 -- pm/common@50 -- $ kill -TERM 2564268
00:34:58.011 17:03:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:58.011 17:03:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:34:58.011 17:03:11 -- pm/common@44 -- $ pid=2564299
00:34:58.011 17:03:11 -- pm/common@50 -- $ sudo -E kill -TERM 2564299
00:34:58.011 + [[ -n 2153502 ]]
00:34:58.011 + sudo kill 2153502
00:34:58.021 [Pipeline] }
00:34:58.037 [Pipeline] // stage
00:34:58.043 [Pipeline] }
00:34:58.058 [Pipeline] // timeout
00:34:58.064 [Pipeline] }
00:34:58.079 [Pipeline] // catchError
00:34:58.085 [Pipeline] }
00:34:58.100 [Pipeline] // wrap
00:34:58.107 [Pipeline] }
00:34:58.121 [Pipeline] // catchError
00:34:58.130 [Pipeline] stage
00:34:58.133 [Pipeline] { (Epilogue)
00:34:58.146 [Pipeline] catchError
00:34:58.148 [Pipeline] {
00:34:58.161 [Pipeline] echo
00:34:58.163 Cleanup processes
00:34:58.170 [Pipeline] sh
00:34:58.460 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:58.460 2564451 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:34:58.460 2564576 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:58.475 [Pipeline] sh
00:34:58.763 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:58.763 ++ grep -v 'sudo pgrep'
00:34:58.763 ++ awk '{print $1}'
00:34:58.763 + sudo kill -9 2564451
00:34:58.775 [Pipeline] sh
00:34:59.060 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:09.039 [Pipeline] sh
00:35:09.327 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:09.328 Artifacts sizes are good
00:35:09.345 [Pipeline] archiveArtifacts
00:35:09.353 Archiving artifacts
00:35:09.539 [Pipeline] sh
00:35:09.893 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:09.910 [Pipeline] cleanWs
00:35:09.921 [WS-CLEANUP] Deleting project workspace...
00:35:09.921 [WS-CLEANUP] Deferred wipeout is used...
00:35:09.929 [WS-CLEANUP] done
00:35:09.931 [Pipeline] }
00:35:09.949 [Pipeline] // catchError
00:35:09.961 [Pipeline] sh
00:35:10.244 + logger -p user.info -t JENKINS-CI
00:35:10.251 [Pipeline] }
00:35:10.263 [Pipeline] // stage
00:35:10.268 [Pipeline] }
00:35:10.281 [Pipeline] // node
00:35:10.286 [Pipeline] End of Pipeline
00:35:10.322 Finished: SUCCESS